00:00:00.000 Started by upstream project "autotest-per-patch" build number 127102 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.103 The recommended git tool is: git 00:00:00.103 using credential 00000000-0000-0000-0000-000000000002 00:00:00.104 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.129 Fetching changes from the remote Git repository 00:00:00.132 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.156 Using shallow fetch with depth 1 00:00:00.156 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.156 > git --version # timeout=10 00:00:00.174 > git --version # 'git version 2.39.2' 00:00:00.174 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.193 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.193 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.055 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.066 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.081 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:06.081 > git config core.sparsecheckout # timeout=10 00:00:06.090 > git read-tree -mu HEAD # timeout=10 00:00:06.105 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:06.134 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:06.134 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:06.221 [Pipeline] Start of Pipeline 00:00:06.235 [Pipeline] library 00:00:06.236 Loading library shm_lib@master 00:00:06.236 Library shm_lib@master is cached. Copying from home. 00:00:06.255 [Pipeline] node 00:00:06.269 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.270 [Pipeline] { 00:00:06.279 [Pipeline] catchError 00:00:06.280 [Pipeline] { 00:00:06.291 [Pipeline] wrap 00:00:06.329 [Pipeline] { 00:00:06.367 [Pipeline] stage 00:00:06.369 [Pipeline] { (Prologue) 00:00:06.548 [Pipeline] sh 00:00:06.834 + logger -p user.info -t JENKINS-CI 00:00:06.848 [Pipeline] echo 00:00:06.850 Node: WFP8 00:00:06.856 [Pipeline] sh 00:00:07.154 [Pipeline] setCustomBuildProperty 00:00:07.164 [Pipeline] echo 00:00:07.165 Cleanup processes 00:00:07.170 [Pipeline] sh 00:00:07.455 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.455 2765417 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.469 [Pipeline] sh 00:00:07.759 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.759 ++ grep -v 'sudo pgrep' 00:00:07.759 ++ awk '{print $1}' 00:00:07.759 + sudo kill -9 00:00:07.759 + true 00:00:07.783 [Pipeline] cleanWs 00:00:07.803 [WS-CLEANUP] Deleting project workspace... 00:00:07.805 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.811 [WS-CLEANUP] done 00:00:07.814 [Pipeline] setCustomBuildProperty 00:00:07.824 [Pipeline] sh 00:00:08.103 + sudo git config --global --replace-all safe.directory '*' 00:00:08.186 [Pipeline] httpRequest 00:00:08.209 [Pipeline] echo 00:00:08.210 Sorcerer 10.211.164.101 is alive 00:00:08.218 [Pipeline] httpRequest 00:00:08.223 HttpMethod: GET 00:00:08.224 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.224 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.235 Response Code: HTTP/1.1 200 OK 00:00:08.236 Success: Status code 200 is in the accepted range: 200,404 00:00:08.236 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:12.543 [Pipeline] sh 00:00:12.826 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:12.842 [Pipeline] httpRequest 00:00:12.872 [Pipeline] echo 00:00:12.874 Sorcerer 10.211.164.101 is alive 00:00:12.883 [Pipeline] httpRequest 00:00:12.888 HttpMethod: GET 00:00:12.889 URL: http://10.211.164.101/packages/spdk_6b560eac9289da2877fdee1f619115b86660e7bc.tar.gz 00:00:12.889 Sending request to url: http://10.211.164.101/packages/spdk_6b560eac9289da2877fdee1f619115b86660e7bc.tar.gz 00:00:12.914 Response Code: HTTP/1.1 200 OK 00:00:12.914 Success: Status code 200 is in the accepted range: 200,404 00:00:12.915 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6b560eac9289da2877fdee1f619115b86660e7bc.tar.gz 00:01:05.784 [Pipeline] sh 00:01:06.095 + tar --no-same-owner -xf spdk_6b560eac9289da2877fdee1f619115b86660e7bc.tar.gz 00:01:08.637 [Pipeline] sh 00:01:08.917 + git -C spdk log --oneline -n5 00:01:08.918 6b560eac9 scripts/setup: Make sure get_block_dev_from_nvme() doesn't trigger errexit 00:01:08.918 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:01:08.918 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:01:08.918 79fce488b test/scheduler: test scheduling period with dynamic scheduler 00:01:08.918 673f37314 ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair 00:01:08.928 [Pipeline] } 00:01:08.942 [Pipeline] // stage 00:01:08.950 [Pipeline] stage 00:01:08.952 [Pipeline] { (Prepare) 00:01:08.969 [Pipeline] writeFile 00:01:08.984 [Pipeline] sh 00:01:09.264 + logger -p user.info -t JENKINS-CI 00:01:09.275 [Pipeline] sh 00:01:09.553 + logger -p user.info -t JENKINS-CI 00:01:09.564 [Pipeline] sh 00:01:09.846 + cat autorun-spdk.conf 00:01:09.846 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.846 SPDK_TEST_NVMF=1 00:01:09.846 SPDK_TEST_NVME_CLI=1 00:01:09.846 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.846 SPDK_TEST_NVMF_NICS=e810 00:01:09.846 SPDK_TEST_VFIOUSER=1 00:01:09.846 SPDK_RUN_UBSAN=1 00:01:09.846 NET_TYPE=phy 00:01:09.853 RUN_NIGHTLY=0 00:01:09.858 [Pipeline] readFile 00:01:09.882 [Pipeline] withEnv 00:01:09.884 [Pipeline] { 00:01:09.901 [Pipeline] sh 00:01:10.182 + set -ex 00:01:10.182 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:10.182 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:10.182 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.182 ++ SPDK_TEST_NVMF=1 00:01:10.182 ++ SPDK_TEST_NVME_CLI=1 00:01:10.182 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.182 ++ SPDK_TEST_NVMF_NICS=e810 00:01:10.182 ++ SPDK_TEST_VFIOUSER=1 00:01:10.182 ++ SPDK_RUN_UBSAN=1 00:01:10.182 ++ NET_TYPE=phy 00:01:10.182 ++ RUN_NIGHTLY=0 00:01:10.182 + case 
$SPDK_TEST_NVMF_NICS in 00:01:10.182 + DRIVERS=ice 00:01:10.182 + [[ tcp == \r\d\m\a ]] 00:01:10.182 + [[ -n ice ]] 00:01:10.182 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:10.182 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:10.182 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:10.182 rmmod: ERROR: Module irdma is not currently loaded 00:01:10.182 rmmod: ERROR: Module i40iw is not currently loaded 00:01:10.182 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:10.182 + true 00:01:10.182 + for D in $DRIVERS 00:01:10.182 + sudo modprobe ice 00:01:10.182 + exit 0 00:01:10.189 [Pipeline] } 00:01:10.203 [Pipeline] // withEnv 00:01:10.208 [Pipeline] } 00:01:10.223 [Pipeline] // stage 00:01:10.231 [Pipeline] catchError 00:01:10.232 [Pipeline] { 00:01:10.246 [Pipeline] timeout 00:01:10.246 Timeout set to expire in 50 min 00:01:10.248 [Pipeline] { 00:01:10.262 [Pipeline] stage 00:01:10.263 [Pipeline] { (Tests) 00:01:10.273 [Pipeline] sh 00:01:10.551 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.551 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.551 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.551 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:10.551 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:10.551 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:10.551 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:10.551 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:10.551 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:10.551 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:10.551 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:10.551 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.551 + source /etc/os-release 00:01:10.551 ++ NAME='Fedora Linux' 00:01:10.551 ++ VERSION='38 (Cloud Edition)' 00:01:10.551 ++ ID=fedora 00:01:10.551 ++ VERSION_ID=38 00:01:10.551 ++ VERSION_CODENAME= 00:01:10.551 ++ PLATFORM_ID=platform:f38 00:01:10.551 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:10.551 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:10.551 ++ LOGO=fedora-logo-icon 00:01:10.551 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:10.551 ++ HOME_URL=https://fedoraproject.org/ 00:01:10.551 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:10.551 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:10.551 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:10.551 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:10.551 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:10.551 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:10.551 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:10.551 ++ SUPPORT_END=2024-05-14 00:01:10.551 ++ VARIANT='Cloud Edition' 00:01:10.551 ++ VARIANT_ID=cloud 00:01:10.551 + uname -a 00:01:10.551 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:10.551 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:13.082 Hugepages 00:01:13.082 node hugesize free / total 00:01:13.082 node0 1048576kB 0 / 0 00:01:13.082 node0 2048kB 0 / 0 00:01:13.082 node1 1048576kB 0 / 0 00:01:13.082 node1 2048kB 0 / 0 00:01:13.082 00:01:13.082 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:13.082 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:13.082 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:01:13.082 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:13.082 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:13.082 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:13.082 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:13.082 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:13.082 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:13.082 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:13.082 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:13.082 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:13.082 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:13.082 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:13.082 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:13.082 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:13.082 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:13.082 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:13.082 + rm -f /tmp/spdk-ld-path 00:01:13.082 + source autorun-spdk.conf 00:01:13.082 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.082 ++ SPDK_TEST_NVMF=1 00:01:13.082 ++ SPDK_TEST_NVME_CLI=1 00:01:13.082 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.082 ++ SPDK_TEST_NVMF_NICS=e810 00:01:13.082 ++ SPDK_TEST_VFIOUSER=1 00:01:13.082 ++ SPDK_RUN_UBSAN=1 00:01:13.082 ++ NET_TYPE=phy 00:01:13.082 ++ RUN_NIGHTLY=0 00:01:13.082 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:13.082 + [[ -n '' ]] 00:01:13.082 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.082 + for M in /var/spdk/build-*-manifest.txt 00:01:13.082 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:13.082 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.082 + for M in /var/spdk/build-*-manifest.txt 00:01:13.082 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:13.082 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.082 ++ uname 00:01:13.082 + [[ Linux == \L\i\n\u\x ]] 00:01:13.082 + sudo dmesg -T 00:01:13.082 + sudo dmesg --clear 00:01:13.082 + dmesg_pid=2766354 00:01:13.082 + [[ Fedora Linux == FreeBSD ]] 00:01:13.082 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.082 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.082 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:13.082 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:13.082 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:13.082 + [[ -x /usr/src/fio-static/fio ]] 00:01:13.082 + sudo dmesg -Tw 00:01:13.082 + export FIO_BIN=/usr/src/fio-static/fio 00:01:13.082 + FIO_BIN=/usr/src/fio-static/fio 00:01:13.082 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:13.082 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:13.082 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:13.082 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.082 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.082 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:13.082 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.082 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.082 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:13.082 Test configuration: 00:01:13.082 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.082 SPDK_TEST_NVMF=1 00:01:13.082 SPDK_TEST_NVME_CLI=1 00:01:13.082 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.082 SPDK_TEST_NVMF_NICS=e810 00:01:13.082 SPDK_TEST_VFIOUSER=1 00:01:13.082 SPDK_RUN_UBSAN=1 00:01:13.082 NET_TYPE=phy 00:01:13.082 RUN_NIGHTLY=0 21:26:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:13.082 21:26:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:13.082 21:26:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:13.082 21:26:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:13.082 21:26:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.082 21:26:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.082 21:26:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.082 21:26:21 -- paths/export.sh@5 -- $ export PATH 00:01:13.082 21:26:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.082 21:26:21 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:13.082 21:26:21 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:13.082 21:26:21 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721849181.XXXXXX 00:01:13.082 21:26:21 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721849181.jsyd6g 00:01:13.082 21:26:21 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:13.082 21:26:21 -- 
common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:13.082 21:26:21 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:13.082 21:26:21 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:13.082 21:26:21 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:13.082 21:26:21 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:13.082 21:26:21 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:13.082 21:26:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.082 21:26:21 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:13.082 21:26:21 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:13.082 21:26:21 -- pm/common@17 -- $ local monitor 00:01:13.082 21:26:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.083 21:26:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.083 21:26:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.083 21:26:21 -- pm/common@21 -- $ date +%s 00:01:13.083 21:26:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.083 21:26:21 -- pm/common@25 -- $ sleep 1 00:01:13.083 21:26:21 -- pm/common@21 -- $ date +%s 00:01:13.083 21:26:21 -- pm/common@21 -- $ date +%s 00:01:13.083 21:26:21 -- pm/common@21 -- $ date +%s 00:01:13.083 21:26:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721849181 00:01:13.083 21:26:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721849181 00:01:13.083 21:26:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721849181 00:01:13.083 21:26:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721849181 00:01:13.083 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721849181_collect-cpu-load.pm.log 00:01:13.083 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721849181_collect-vmstat.pm.log 00:01:13.341 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721849181_collect-cpu-temp.pm.log 00:01:13.341 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721849181_collect-bmc-pm.bmc.pm.log 00:01:14.292 21:26:22 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:14.292 21:26:22 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:14.292 21:26:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:14.292 21:26:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.292 21:26:22 -- spdk/autobuild.sh@16 -- $ date -u 00:01:14.292 Wed Jul 24 07:26:22 PM UTC 2024 00:01:14.292 21:26:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:14.292 v24.09-pre-310-g6b560eac9 00:01:14.292 21:26:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:14.292 21:26:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:14.292 21:26:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:14.292 21:26:22 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:14.292 21:26:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:14.292 21:26:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.292 ************************************ 00:01:14.292 START TEST ubsan 00:01:14.292 ************************************ 00:01:14.292 21:26:22 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:14.292 using ubsan 00:01:14.292 00:01:14.292 real 0m0.000s 00:01:14.292 user 0m0.000s 00:01:14.292 sys 0m0.000s 00:01:14.292 21:26:22 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:14.292 21:26:22 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:14.292 ************************************ 00:01:14.292 END TEST ubsan 00:01:14.292 ************************************ 00:01:14.292 21:26:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:14.292 21:26:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:14.292 21:26:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:14.292 21:26:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:14.292 21:26:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:14.292 21:26:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:14.292 21:26:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:14.292 21:26:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:14.292 21:26:22 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:14.292 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:14.292 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:14.857 Using 'verbs' RDMA provider 00:01:27.664 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:37.670 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:38.236 Creating mk/config.mk...done. 00:01:38.236 Creating mk/cc.flags.mk...done. 00:01:38.236 Type 'make' to build. 00:01:38.236 21:26:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:38.236 21:26:46 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:38.236 21:26:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:38.236 21:26:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.236 ************************************ 00:01:38.236 START TEST make 00:01:38.236 ************************************ 00:01:38.236 21:26:46 make -- common/autotest_common.sh@1123 -- $ make -j96 00:01:38.494 make[1]: Nothing to be done for 'all'. 
00:01:39.873 The Meson build system 00:01:39.873 Version: 1.3.1 00:01:39.873 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:39.873 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:39.873 Build type: native build 00:01:39.873 Project name: libvfio-user 00:01:39.873 Project version: 0.0.1 00:01:39.873 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:39.873 C linker for the host machine: cc ld.bfd 2.39-16 00:01:39.873 Host machine cpu family: x86_64 00:01:39.873 Host machine cpu: x86_64 00:01:39.873 Run-time dependency threads found: YES 00:01:39.873 Library dl found: YES 00:01:39.873 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:39.873 Run-time dependency json-c found: YES 0.17 00:01:39.873 Run-time dependency cmocka found: YES 1.1.7 00:01:39.873 Program pytest-3 found: NO 00:01:39.873 Program flake8 found: NO 00:01:39.873 Program misspell-fixer found: NO 00:01:39.873 Program restructuredtext-lint found: NO 00:01:39.873 Program valgrind found: YES (/usr/bin/valgrind) 00:01:39.873 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:39.873 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.873 Compiler for C supports arguments -Wwrite-strings: YES 00:01:39.873 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:39.873 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:39.873 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:39.873 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:39.873 Build targets in project: 8 00:01:39.873 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:39.873 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:39.873 00:01:39.873 libvfio-user 0.0.1 00:01:39.873 00:01:39.873 User defined options 00:01:39.873 buildtype : debug 00:01:39.873 default_library: shared 00:01:39.873 libdir : /usr/local/lib 00:01:39.873 00:01:39.873 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:40.439 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:40.439 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:40.439 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:40.439 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:40.439 [4/37] Compiling C object samples/null.p/null.c.o 00:01:40.439 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:40.439 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:40.439 [7/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:40.439 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:40.439 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:40.439 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:40.439 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:40.439 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:40.439 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:40.439 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:40.439 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:40.439 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:40.439 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:40.439 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:40.439 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:40.439 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:40.439 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:40.439 [22/37] Compiling C object samples/client.p/client.c.o 00:01:40.439 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:40.439 [24/37] Compiling C object samples/server.p/server.c.o 00:01:40.439 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:40.439 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:40.439 [27/37] Linking target samples/client 00:01:40.439 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:40.697 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:40.697 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:40.697 [31/37] Linking target test/unit_tests 00:01:40.697 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:40.697 [33/37] Linking target samples/gpio-pci-idio-16 00:01:40.697 [34/37] Linking target samples/server 00:01:40.697 [35/37] Linking target samples/null 00:01:40.697 [36/37] Linking target samples/lspci 00:01:40.697 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:40.697 INFO: autodetecting backend as ninja 00:01:40.697 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:40.954 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.212 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:41.212 ninja: no work to do. 00:01:46.481 The Meson build system 00:01:46.481 Version: 1.3.1 00:01:46.481 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:46.481 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:46.481 Build type: native build 00:01:46.481 Program cat found: YES (/usr/bin/cat) 00:01:46.481 Project name: DPDK 00:01:46.481 Project version: 24.03.0 00:01:46.481 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:46.481 C linker for the host machine: cc ld.bfd 2.39-16 00:01:46.481 Host machine cpu family: x86_64 00:01:46.481 Host machine cpu: x86_64 00:01:46.481 Message: ## Building in Developer Mode ## 00:01:46.481 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:46.481 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:46.481 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:46.481 Program python3 found: YES (/usr/bin/python3) 00:01:46.481 Program cat found: YES (/usr/bin/cat) 00:01:46.481 Compiler for C supports arguments -march=native: YES 00:01:46.481 Checking for size of "void *" : 8 00:01:46.481 Checking for size of "void *" : 8 (cached) 00:01:46.481 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:46.481 Library m found: YES 00:01:46.481 Library numa found: YES 00:01:46.481 Has header "numaif.h" : YES 00:01:46.481 Library fdt found: NO 00:01:46.481 Library execinfo found: NO 00:01:46.481 Has header "execinfo.h" : YES 00:01:46.481 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:46.481 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:46.481 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:46.481 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:46.481 Run-time dependency openssl found: YES 3.0.9 00:01:46.481 Run-time dependency libpcap found: YES 1.10.4 00:01:46.481 Has header "pcap.h" with dependency libpcap: YES 00:01:46.481 Compiler for C supports arguments -Wcast-qual: YES 00:01:46.481 Compiler for C supports arguments -Wdeprecated: YES 00:01:46.481 Compiler for C supports arguments -Wformat: YES 00:01:46.481 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:46.481 Compiler for C supports arguments -Wformat-security: NO 00:01:46.481 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:46.481 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:46.481 Compiler for C supports arguments -Wnested-externs: YES 00:01:46.481 Compiler for C supports arguments -Wold-style-definition: YES 00:01:46.481 Compiler for C supports arguments -Wpointer-arith: YES 00:01:46.482 Compiler for C supports arguments -Wsign-compare: YES 00:01:46.482 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:46.482 Compiler for C supports arguments -Wundef: YES 00:01:46.482 Compiler for C supports arguments -Wwrite-strings: YES 00:01:46.482 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:46.482 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:46.482 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:46.482 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:46.482 Program objdump found: YES (/usr/bin/objdump) 00:01:46.482 Compiler for C supports arguments -mavx512f: YES 00:01:46.482 Checking if "AVX512 checking" compiles: YES 00:01:46.482 Fetching value of define "__SSE4_2__" : 1 00:01:46.482 Fetching value of define "__AES__" : 1 00:01:46.482 Fetching value of define "__AVX__" : 1 00:01:46.482 Fetching value of define "__AVX2__" : 1 00:01:46.482 Fetching value of define "__AVX512BW__" : 1 00:01:46.482 Fetching value of define "__AVX512CD__" : 1 00:01:46.482 Fetching value of define "__AVX512DQ__" : 1 00:01:46.482 Fetching value of define "__AVX512F__" : 1 00:01:46.482 Fetching value of define "__AVX512VL__" : 1 00:01:46.482 Fetching value of define "__PCLMUL__" : 1 00:01:46.482 Fetching value of define "__RDRND__" : 1 00:01:46.482 Fetching value of define "__RDSEED__" : 1 00:01:46.482 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:46.482 Fetching value of define "__znver1__" : (undefined) 00:01:46.482 Fetching value of define "__znver2__" : (undefined) 00:01:46.482 Fetching value of define "__znver3__" : (undefined) 00:01:46.482 Fetching value of define "__znver4__" : (undefined) 00:01:46.482 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:46.482 Message: lib/log: Defining dependency "log" 00:01:46.482 Message: lib/kvargs: Defining dependency "kvargs" 00:01:46.482 Message: lib/telemetry: Defining dependency "telemetry" 00:01:46.482 Checking for function "getentropy" : NO 00:01:46.482 Message: lib/eal: Defining dependency "eal" 00:01:46.482 Message: lib/ring: Defining dependency "ring" 00:01:46.482 Message: lib/rcu: Defining dependency "rcu" 00:01:46.482 Message: lib/mempool: Defining dependency "mempool" 00:01:46.482 Message: lib/mbuf: Defining dependency "mbuf" 00:01:46.482 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:46.482 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.482 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.482 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:46.482 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:46.482 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:46.482 Compiler for C supports arguments -mpclmul: YES 00:01:46.482 Compiler for C supports arguments -maes: YES 00:01:46.482 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.482 Compiler for C supports arguments -mavx512bw: YES 00:01:46.482 Compiler for C supports arguments -mavx512dq: YES 00:01:46.482 Compiler for C supports arguments -mavx512vl: YES 00:01:46.482 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:46.482 Compiler for C supports arguments -mavx2: YES 00:01:46.482 Compiler for C supports arguments -mavx: YES 00:01:46.482 Message: lib/net: Defining dependency "net" 00:01:46.482 Message: lib/meter: Defining dependency "meter" 00:01:46.482 Message: lib/ethdev: Defining dependency "ethdev" 00:01:46.482 Message: lib/pci: Defining dependency "pci" 00:01:46.482 Message: lib/cmdline: Defining dependency "cmdline" 00:01:46.482 Message: lib/hash: Defining dependency "hash" 00:01:46.482 Message: lib/timer: Defining dependency "timer" 00:01:46.482 Message: lib/compressdev: Defining dependency "compressdev" 00:01:46.482 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:46.482 Message: lib/dmadev: Defining dependency "dmadev" 00:01:46.482 
Compiler for C supports arguments -Wno-cast-qual: YES 00:01:46.482 Message: lib/power: Defining dependency "power" 00:01:46.482 Message: lib/reorder: Defining dependency "reorder" 00:01:46.482 Message: lib/security: Defining dependency "security" 00:01:46.482 Has header "linux/userfaultfd.h" : YES 00:01:46.482 Has header "linux/vduse.h" : YES 00:01:46.482 Message: lib/vhost: Defining dependency "vhost" 00:01:46.482 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:46.482 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:46.482 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:46.482 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:46.482 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:46.482 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:46.482 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:46.482 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:46.482 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:46.482 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:46.482 Program doxygen found: YES (/usr/bin/doxygen) 00:01:46.482 Configuring doxy-api-html.conf using configuration 00:01:46.482 Configuring doxy-api-man.conf using configuration 00:01:46.482 Program mandb found: YES (/usr/bin/mandb) 00:01:46.482 Program sphinx-build found: NO 00:01:46.482 Configuring rte_build_config.h using configuration 00:01:46.482 Message: 00:01:46.482 ================= 00:01:46.482 Applications Enabled 00:01:46.482 ================= 00:01:46.482 00:01:46.482 apps: 00:01:46.482 00:01:46.482 00:01:46.482 Message: 00:01:46.482 ================= 00:01:46.482 Libraries Enabled 00:01:46.482 ================= 00:01:46.482 00:01:46.482 libs: 00:01:46.482 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:46.482 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:46.482 cryptodev, dmadev, power, reorder, security, vhost, 00:01:46.482 00:01:46.482 Message: 00:01:46.482 =============== 00:01:46.482 Drivers Enabled 00:01:46.482 =============== 00:01:46.482 00:01:46.482 common: 00:01:46.482 00:01:46.482 bus: 00:01:46.482 pci, vdev, 00:01:46.482 mempool: 00:01:46.482 ring, 00:01:46.482 dma: 00:01:46.482 00:01:46.482 net: 00:01:46.482 00:01:46.482 crypto: 00:01:46.482 00:01:46.482 compress: 00:01:46.482 00:01:46.482 vdpa: 00:01:46.482 00:01:46.482 00:01:46.482 Message: 00:01:46.482 ================= 00:01:46.482 Content Skipped 00:01:46.482 ================= 00:01:46.482 00:01:46.482 apps: 00:01:46.482 dumpcap: explicitly disabled via build config 00:01:46.482 graph: explicitly disabled via build config 00:01:46.482 pdump: explicitly disabled via build config 00:01:46.482 proc-info: explicitly disabled via build config 00:01:46.482 test-acl: explicitly disabled via build config 00:01:46.482 test-bbdev: explicitly disabled via build config 00:01:46.482 test-cmdline: explicitly disabled via build config 00:01:46.482 test-compress-perf: explicitly disabled via build config 00:01:46.482 test-crypto-perf: explicitly disabled via build config 00:01:46.482 test-dma-perf: explicitly disabled via build config 00:01:46.482 test-eventdev: explicitly disabled via build config 00:01:46.482 test-fib: explicitly disabled via build config 00:01:46.482 test-flow-perf: explicitly disabled via build config 00:01:46.482 test-gpudev: explicitly disabled via build config 
00:01:46.482 test-mldev: explicitly disabled via build config 00:01:46.482 test-pipeline: explicitly disabled via build config 00:01:46.482 test-pmd: explicitly disabled via build config 00:01:46.482 test-regex: explicitly disabled via build config 00:01:46.482 test-sad: explicitly disabled via build config 00:01:46.482 test-security-perf: explicitly disabled via build config 00:01:46.482 00:01:46.482 libs: 00:01:46.482 argparse: explicitly disabled via build config 00:01:46.482 metrics: explicitly disabled via build config 00:01:46.482 acl: explicitly disabled via build config 00:01:46.482 bbdev: explicitly disabled via build config 00:01:46.482 bitratestats: explicitly disabled via build config 00:01:46.482 bpf: explicitly disabled via build config 00:01:46.482 cfgfile: explicitly disabled via build config 00:01:46.482 distributor: explicitly disabled via build config 00:01:46.482 efd: explicitly disabled via build config 00:01:46.482 eventdev: explicitly disabled via build config 00:01:46.482 dispatcher: explicitly disabled via build config 00:01:46.482 gpudev: explicitly disabled via build config 00:01:46.482 gro: explicitly disabled via build config 00:01:46.482 gso: explicitly disabled via build config 00:01:46.482 ip_frag: explicitly disabled via build config 00:01:46.482 jobstats: explicitly disabled via build config 00:01:46.482 latencystats: explicitly disabled via build config 00:01:46.482 lpm: explicitly disabled via build config 00:01:46.482 member: explicitly disabled via build config 00:01:46.482 pcapng: explicitly disabled via build config 00:01:46.482 rawdev: explicitly disabled via build config 00:01:46.482 regexdev: explicitly disabled via build config 00:01:46.482 mldev: explicitly disabled via build config 00:01:46.482 rib: explicitly disabled via build config 00:01:46.482 sched: explicitly disabled via build config 00:01:46.483 stack: explicitly disabled via build config 00:01:46.483 ipsec: explicitly disabled via build config 00:01:46.483 pdcp: explicitly disabled via build config 00:01:46.483 fib: explicitly disabled via build config 00:01:46.483 port: explicitly disabled via build config 00:01:46.483 pdump: explicitly disabled via build config 00:01:46.483 table: explicitly disabled via build config 00:01:46.483 pipeline: explicitly disabled via build config 00:01:46.483 graph: explicitly disabled via build config 00:01:46.483 node: explicitly disabled via build config 00:01:46.483 00:01:46.483 drivers: 00:01:46.483 common/cpt: not in enabled drivers build config 00:01:46.483 common/dpaax: not in enabled drivers build config 00:01:46.483 common/iavf: not in enabled drivers build config 00:01:46.483 common/idpf: not in enabled drivers build config 00:01:46.483 common/ionic: not in enabled drivers build config 00:01:46.483 common/mvep: not in enabled drivers build config 00:01:46.483 common/octeontx: not in enabled drivers build config 00:01:46.483 bus/auxiliary: not in enabled drivers build config 00:01:46.483 bus/cdx: not in enabled drivers build config 00:01:46.483 bus/dpaa: not in enabled drivers build config 00:01:46.483 bus/fslmc: not in enabled drivers build config 00:01:46.483 bus/ifpga: not in enabled drivers build config 00:01:46.483 bus/platform: not in enabled drivers build config 00:01:46.483 bus/uacce: not in enabled drivers build config 00:01:46.483 bus/vmbus: not in enabled drivers build config 00:01:46.483 common/cnxk: not in enabled drivers build config 00:01:46.483 common/mlx5: not in enabled drivers build config 00:01:46.483 common/nfp: not in 
enabled drivers build config 00:01:46.483 common/nitrox: not in enabled drivers build config 00:01:46.483 common/qat: not in enabled drivers build config 00:01:46.483 common/sfc_efx: not in enabled drivers build config 00:01:46.483 mempool/bucket: not in enabled drivers build config 00:01:46.483 mempool/cnxk: not in enabled drivers build config 00:01:46.483 mempool/dpaa: not in enabled drivers build config 00:01:46.483 mempool/dpaa2: not in enabled drivers build config 00:01:46.483 mempool/octeontx: not in enabled drivers build config 00:01:46.483 mempool/stack: not in enabled drivers build config 00:01:46.483 dma/cnxk: not in enabled drivers build config 00:01:46.483 dma/dpaa: not in enabled drivers build config 00:01:46.483 dma/dpaa2: not in enabled drivers build config 00:01:46.483 dma/hisilicon: not in enabled drivers build config 00:01:46.483 dma/idxd: not in enabled drivers build config 00:01:46.483 dma/ioat: not in enabled drivers build config 00:01:46.483 dma/skeleton: not in enabled drivers build config 00:01:46.483 net/af_packet: not in enabled drivers build config 00:01:46.483 net/af_xdp: not in enabled drivers build config 00:01:46.483 net/ark: not in enabled drivers build config 00:01:46.483 net/atlantic: not in enabled drivers build config 00:01:46.483 net/avp: not in enabled drivers build config 00:01:46.483 net/axgbe: not in enabled drivers build config 00:01:46.483 net/bnx2x: not in enabled drivers build config 00:01:46.483 net/bnxt: not in enabled drivers build config 00:01:46.483 net/bonding: not in enabled drivers build config 00:01:46.483 net/cnxk: not in enabled drivers build config 00:01:46.483 net/cpfl: not in enabled drivers build config 00:01:46.483 net/cxgbe: not in enabled drivers build config 00:01:46.483 net/dpaa: not in enabled drivers build config 00:01:46.483 net/dpaa2: not in enabled drivers build config 00:01:46.483 net/e1000: not in enabled drivers build config 00:01:46.483 net/ena: not in enabled drivers build config 00:01:46.483 net/enetc: not in enabled drivers build config 00:01:46.483 net/enetfec: not in enabled drivers build config 00:01:46.483 net/enic: not in enabled drivers build config 00:01:46.483 net/failsafe: not in enabled drivers build config 00:01:46.483 net/fm10k: not in enabled drivers build config 00:01:46.483 net/gve: not in enabled drivers build config 00:01:46.483 net/hinic: not in enabled drivers build config 00:01:46.483 net/hns3: not in enabled drivers build config 00:01:46.483 net/i40e: not in enabled drivers build config 00:01:46.483 net/iavf: not in enabled drivers build config 00:01:46.483 net/ice: not in enabled drivers build config 00:01:46.483 net/idpf: not in enabled drivers build config 00:01:46.483 net/igc: not in enabled drivers build config 00:01:46.483 net/ionic: not in enabled drivers build config 00:01:46.483 net/ipn3ke: not in enabled drivers build config 00:01:46.483 net/ixgbe: not in enabled drivers build config 00:01:46.483 net/mana: not in enabled drivers build config 00:01:46.483 net/memif: not in enabled drivers build config 00:01:46.483 net/mlx4: not in enabled drivers build config 00:01:46.483 net/mlx5: not in enabled drivers build config 00:01:46.483 net/mvneta: not in enabled drivers build config 00:01:46.483 net/mvpp2: not in enabled drivers build config 00:01:46.483 net/netvsc: not in enabled drivers build config 00:01:46.483 net/nfb: not in enabled drivers build config 00:01:46.483 net/nfp: not in enabled drivers build config 00:01:46.483 net/ngbe: not in enabled drivers build config 00:01:46.483 
net/null: not in enabled drivers build config 00:01:46.483 net/octeontx: not in enabled drivers build config 00:01:46.483 net/octeon_ep: not in enabled drivers build config 00:01:46.483 net/pcap: not in enabled drivers build config 00:01:46.483 net/pfe: not in enabled drivers build config 00:01:46.483 net/qede: not in enabled drivers build config 00:01:46.483 net/ring: not in enabled drivers build config 00:01:46.483 net/sfc: not in enabled drivers build config 00:01:46.483 net/softnic: not in enabled drivers build config 00:01:46.483 net/tap: not in enabled drivers build config 00:01:46.483 net/thunderx: not in enabled drivers build config 00:01:46.483 net/txgbe: not in enabled drivers build config 00:01:46.483 net/vdev_netvsc: not in enabled drivers build config 00:01:46.483 net/vhost: not in enabled drivers build config 00:01:46.483 net/virtio: not in enabled drivers build config 00:01:46.483 net/vmxnet3: not in enabled drivers build config 00:01:46.483 raw/*: missing internal dependency, "rawdev" 00:01:46.483 crypto/armv8: not in enabled drivers build config 00:01:46.483 crypto/bcmfs: not in enabled drivers build config 00:01:46.483 crypto/caam_jr: not in enabled drivers build config 00:01:46.483 crypto/ccp: not in enabled drivers build config 00:01:46.483 crypto/cnxk: not in enabled drivers build config 00:01:46.483 crypto/dpaa_sec: not in enabled drivers build config 00:01:46.483 crypto/dpaa2_sec: not in enabled drivers build config 00:01:46.483 crypto/ipsec_mb: not in enabled drivers build config 00:01:46.483 crypto/mlx5: not in enabled drivers build config 00:01:46.483 crypto/mvsam: not in enabled drivers build config 00:01:46.483 crypto/nitrox: not in enabled drivers build config 00:01:46.483 crypto/null: not in enabled drivers build config 00:01:46.483 crypto/octeontx: not in enabled drivers build config 00:01:46.483 crypto/openssl: not in enabled drivers build config 00:01:46.483 crypto/scheduler: not in enabled drivers build config 00:01:46.483 crypto/uadk: not in enabled drivers build config 00:01:46.483 crypto/virtio: not in enabled drivers build config 00:01:46.483 compress/isal: not in enabled drivers build config 00:01:46.483 compress/mlx5: not in enabled drivers build config 00:01:46.483 compress/nitrox: not in enabled drivers build config 00:01:46.483 compress/octeontx: not in enabled drivers build config 00:01:46.483 compress/zlib: not in enabled drivers build config 00:01:46.483 regex/*: missing internal dependency, "regexdev" 00:01:46.483 ml/*: missing internal dependency, "mldev" 00:01:46.483 vdpa/ifc: not in enabled drivers build config 00:01:46.483 vdpa/mlx5: not in enabled drivers build config 00:01:46.483 vdpa/nfp: not in enabled drivers build config 00:01:46.483 vdpa/sfc: not in enabled drivers build config 00:01:46.483 event/*: missing internal dependency, "eventdev" 00:01:46.483 baseband/*: missing internal dependency, "bbdev" 00:01:46.483 gpu/*: missing internal dependency, "gpudev" 00:01:46.483 00:01:46.483 00:01:46.483 Build targets in project: 85 00:01:46.483 00:01:46.483 DPDK 24.03.0 00:01:46.483 00:01:46.483 User defined options 00:01:46.483 buildtype : debug 00:01:46.483 default_library : shared 00:01:46.483 libdir : lib 00:01:46.483 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:46.483 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:46.483 c_link_args : 00:01:46.483 cpu_instruction_set: native 00:01:46.483 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:46.483 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:46.483 enable_docs : false 00:01:46.483 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:46.483 enable_kmods : false 00:01:46.483 max_lcores : 128 00:01:46.483 tests : false 00:01:46.483 00:01:46.483 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.752 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:46.752 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:47.015 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:47.015 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:47.015 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:47.015 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:47.015 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:47.015 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:47.015 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:47.015 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:47.015 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:47.015 [11/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:47.015 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:47.015 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:47.015 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:47.015 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:47.015 [16/268] Linking static target lib/librte_kvargs.a 00:01:47.015 [17/268] Linking static target lib/librte_log.a 00:01:47.015 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:47.015 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:47.015 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:47.015 [21/268] Linking static target lib/librte_pci.a 00:01:47.015 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:47.275 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:47.275 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:47.275 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:47.275 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:47.275 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:47.275 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:47.275 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:47.275 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:47.275 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:47.275 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:47.275 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:47.275 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:47.275 [35/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:47.275 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:47.275 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:47.275 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:47.275 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:47.275 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:47.275 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:47.275 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:47.275 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:47.275 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:47.275 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:47.275 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:47.275 [47/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:47.275 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:47.534 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:47.534 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:47.534 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:47.534 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:47.534 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:47.534 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:47.534 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:47.534 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:47.534 [57/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:47.534 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:47.534 [59/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:47.534 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:47.534 [61/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:47.534 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:47.534 [63/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:47.534 [64/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:47.534 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:47.534 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:47.534 [67/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:47.534 [68/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:47.534 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:47.534 [70/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:47.534 [71/268] 
Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:47.534 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:47.534 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:47.534 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:47.534 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:47.534 [76/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:47.534 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:47.534 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:47.534 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:47.534 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:47.534 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:47.534 [82/268] Linking static target lib/librte_meter.a 00:01:47.534 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:47.534 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:47.534 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:47.534 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:47.534 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:47.534 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.534 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:47.534 [90/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.534 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.534 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:47.534 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:47.534 [94/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:47.534 [95/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:47.534 [96/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:47.534 [97/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:47.534 [98/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.534 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:47.534 [100/268] Linking static target lib/librte_ring.a 00:01:47.534 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:47.534 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:47.534 [103/268] Linking static target lib/librte_telemetry.a 00:01:47.534 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:47.534 [105/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:47.535 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:47.535 [107/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:47.535 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:47.535 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:47.535 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:47.535 [111/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:47.535 [112/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:47.535 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:47.535 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:47.535 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:47.535 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.535 [117/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:47.535 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:47.535 [119/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:47.535 [120/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:47.535 [121/268] Linking static target lib/librte_mempool.a 00:01:47.535 [122/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:47.535 [123/268] Linking static target lib/librte_net.a 00:01:47.535 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:47.792 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:47.792 [126/268] Linking static target lib/librte_rcu.a 00:01:47.792 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:47.792 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:47.792 [129/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:47.792 [130/268] Linking static target lib/librte_cmdline.a 00:01:47.792 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:47.792 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:47.792 [133/268] Linking static target lib/librte_eal.a 00:01:47.792 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:47.792 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.792 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:47.792 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.792 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:47.792 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:47.792 [140/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:47.792 [141/268] Linking static target lib/librte_mbuf.a 00:01:47.792 [142/268] Linking target lib/librte_log.so.24.1 00:01:47.793 [143/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:47.793 [144/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.793 [145/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:47.793 [146/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:47.793 [147/268] Linking static target lib/librte_timer.a 00:01:47.793 [148/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:47.793 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:47.793 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:47.793 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:47.793 [152/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 
00:01:47.793 [153/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:47.793 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:47.793 [155/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.793 [156/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.793 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:47.793 [158/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:48.050 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:48.050 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:48.050 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:48.050 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:48.050 [163/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:48.050 [164/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:48.050 [165/268] Linking target lib/librte_kvargs.so.24.1 00:01:48.050 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:48.050 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:48.050 [168/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:48.050 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:48.050 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:48.050 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:48.050 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:48.050 [173/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.050 [174/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:48.050 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:48.050 [176/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:48.050 [177/268] Linking static target lib/librte_security.a 00:01:48.050 [178/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:48.050 [179/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:48.050 [180/268] Linking target lib/librte_telemetry.so.24.1 00:01:48.050 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:48.050 [182/268] Linking static target lib/librte_compressdev.a 00:01:48.050 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:48.050 [184/268] Linking static target lib/librte_dmadev.a 00:01:48.050 [185/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:48.050 [186/268] Linking static target lib/librte_power.a 00:01:48.050 [187/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:48.050 [188/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:48.050 [189/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:48.050 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:48.050 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:48.050 [192/268] Linking static target 
drivers/libtmp_rte_bus_pci.a 00:01:48.050 [193/268] Linking static target lib/librte_reorder.a 00:01:48.050 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:48.050 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:48.051 [196/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:48.051 [197/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:48.051 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.051 [199/268] Linking static target lib/librte_hash.a 00:01:48.051 [200/268] Linking static target drivers/librte_bus_vdev.a 00:01:48.051 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.308 [202/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:48.308 [203/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.308 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:48.308 [205/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.308 [206/268] Linking static target drivers/librte_mempool_ring.a 00:01:48.308 [207/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.308 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:48.308 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.308 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.308 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:48.308 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.308 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:48.308 [214/268] Linking static target lib/librte_cryptodev.a 00:01:48.567 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.567 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.567 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.567 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.567 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.567 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.825 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.825 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:48.825 [223/268] Linking static target lib/librte_ethdev.a 00:01:48.825 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:48.825 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.084 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.084 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.650 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:49.909 [229/268] 
Linking static target lib/librte_vhost.a 00:01:50.168 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.545 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.813 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.381 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.639 [234/268] Linking target lib/librte_eal.so.24.1 00:01:57.639 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:57.639 [236/268] Linking target lib/librte_ring.so.24.1 00:01:57.639 [237/268] Linking target lib/librte_meter.so.24.1 00:01:57.639 [238/268] Linking target lib/librte_timer.so.24.1 00:01:57.639 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:57.639 [240/268] Linking target lib/librte_pci.so.24.1 00:01:57.639 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:57.897 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:57.897 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:57.897 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:57.897 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:57.897 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:57.897 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:57.897 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:57.897 [249/268] Linking target lib/librte_rcu.so.24.1 00:01:57.897 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:57.898 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:58.156 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:58.156 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:58.156 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:58.156 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:58.156 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:58.156 [257/268] Linking target lib/librte_net.so.24.1 00:01:58.156 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:58.414 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:58.414 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:58.414 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:58.414 [262/268] Linking target lib/librte_hash.so.24.1 00:01:58.414 [263/268] Linking target lib/librte_security.so.24.1 00:01:58.414 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:58.672 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:58.672 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:58.672 [267/268] Linking target lib/librte_power.so.24.1 00:01:58.672 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:58.672 INFO: autodetecting backend as ninja 00:01:58.672 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:59.621 CC lib/ut_mock/mock.o 00:01:59.621 CC lib/log/log.o 00:01:59.621 CC lib/log/log_deprecated.o 00:01:59.621 CC 
lib/log/log_flags.o 00:01:59.621 CC lib/ut/ut.o 00:01:59.926 LIB libspdk_ut_mock.a 00:01:59.926 SO libspdk_ut_mock.so.6.0 00:01:59.926 LIB libspdk_log.a 00:01:59.926 LIB libspdk_ut.a 00:01:59.926 SO libspdk_log.so.7.0 00:01:59.926 SO libspdk_ut.so.2.0 00:01:59.926 SYMLINK libspdk_ut_mock.so 00:01:59.926 SYMLINK libspdk_log.so 00:01:59.926 SYMLINK libspdk_ut.so 00:02:00.192 CC lib/dma/dma.o 00:02:00.192 CC lib/util/base64.o 00:02:00.192 CC lib/util/bit_array.o 00:02:00.192 CC lib/util/cpuset.o 00:02:00.192 CC lib/util/crc16.o 00:02:00.192 CC lib/util/crc32c.o 00:02:00.192 CC lib/util/crc32.o 00:02:00.192 CC lib/util/crc64.o 00:02:00.192 CC lib/util/crc32_ieee.o 00:02:00.192 CC lib/util/dif.o 00:02:00.192 CC lib/util/fd.o 00:02:00.192 CC lib/util/fd_group.o 00:02:00.192 CC lib/util/file.o 00:02:00.192 CC lib/util/hexlify.o 00:02:00.192 CC lib/util/iov.o 00:02:00.192 CC lib/util/math.o 00:02:00.192 CXX lib/trace_parser/trace.o 00:02:00.192 CC lib/util/net.o 00:02:00.192 CC lib/util/pipe.o 00:02:00.192 CC lib/util/string.o 00:02:00.192 CC lib/util/strerror_tls.o 00:02:00.192 CC lib/util/uuid.o 00:02:00.192 CC lib/util/xor.o 00:02:00.192 CC lib/util/zipf.o 00:02:00.192 CC lib/ioat/ioat.o 00:02:00.449 LIB libspdk_dma.a 00:02:00.449 CC lib/vfio_user/host/vfio_user_pci.o 00:02:00.449 CC lib/vfio_user/host/vfio_user.o 00:02:00.449 SO libspdk_dma.so.4.0 00:02:00.449 SYMLINK libspdk_dma.so 00:02:00.449 LIB libspdk_ioat.a 00:02:00.449 SO libspdk_ioat.so.7.0 00:02:00.449 SYMLINK libspdk_ioat.so 00:02:00.707 LIB libspdk_vfio_user.a 00:02:00.707 LIB libspdk_util.a 00:02:00.707 SO libspdk_vfio_user.so.5.0 00:02:00.707 SO libspdk_util.so.10.0 00:02:00.707 SYMLINK libspdk_vfio_user.so 00:02:00.707 SYMLINK libspdk_util.so 00:02:00.964 LIB libspdk_trace_parser.a 00:02:00.964 SO libspdk_trace_parser.so.5.0 00:02:00.964 SYMLINK libspdk_trace_parser.so 00:02:00.964 CC lib/vmd/vmd.o 00:02:00.964 CC lib/idxd/idxd.o 00:02:00.964 CC lib/idxd/idxd_user.o 00:02:00.964 CC lib/idxd/idxd_kernel.o 00:02:00.964 CC lib/vmd/led.o 00:02:00.964 CC lib/conf/conf.o 00:02:00.964 CC lib/env_dpdk/env.o 00:02:00.964 CC lib/env_dpdk/pci.o 00:02:00.964 CC lib/env_dpdk/memory.o 00:02:00.964 CC lib/env_dpdk/init.o 00:02:00.964 CC lib/env_dpdk/threads.o 00:02:00.964 CC lib/env_dpdk/pci_ioat.o 00:02:01.222 CC lib/env_dpdk/pci_vmd.o 00:02:01.222 CC lib/env_dpdk/pci_virtio.o 00:02:01.222 CC lib/env_dpdk/pci_event.o 00:02:01.222 CC lib/env_dpdk/pci_idxd.o 00:02:01.222 CC lib/env_dpdk/sigbus_handler.o 00:02:01.222 CC lib/env_dpdk/pci_dpdk.o 00:02:01.222 CC lib/rdma_utils/rdma_utils.o 00:02:01.222 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:01.222 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:01.222 CC lib/json/json_parse.o 00:02:01.222 CC lib/json/json_util.o 00:02:01.222 CC lib/json/json_write.o 00:02:01.222 CC lib/rdma_provider/common.o 00:02:01.222 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:01.222 LIB libspdk_rdma_provider.a 00:02:01.222 LIB libspdk_conf.a 00:02:01.222 SO libspdk_rdma_provider.so.6.0 00:02:01.222 SO libspdk_conf.so.6.0 00:02:01.481 LIB libspdk_json.a 00:02:01.481 LIB libspdk_rdma_utils.a 00:02:01.481 SYMLINK libspdk_conf.so 00:02:01.481 SYMLINK libspdk_rdma_provider.so 00:02:01.481 SO libspdk_json.so.6.0 00:02:01.481 SO libspdk_rdma_utils.so.1.0 00:02:01.481 SYMLINK libspdk_rdma_utils.so 00:02:01.481 SYMLINK libspdk_json.so 00:02:01.481 LIB libspdk_idxd.a 00:02:01.481 SO libspdk_idxd.so.12.0 00:02:01.481 LIB libspdk_vmd.a 00:02:01.739 SO libspdk_vmd.so.6.0 00:02:01.739 SYMLINK libspdk_idxd.so 00:02:01.739 SYMLINK libspdk_vmd.so 
00:02:01.739 CC lib/jsonrpc/jsonrpc_server.o 00:02:01.739 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:01.739 CC lib/jsonrpc/jsonrpc_client.o 00:02:01.739 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:01.999 LIB libspdk_jsonrpc.a 00:02:01.999 SO libspdk_jsonrpc.so.6.0 00:02:01.999 SYMLINK libspdk_jsonrpc.so 00:02:01.999 LIB libspdk_env_dpdk.a 00:02:02.259 SO libspdk_env_dpdk.so.15.0 00:02:02.259 SYMLINK libspdk_env_dpdk.so 00:02:02.259 CC lib/rpc/rpc.o 00:02:02.519 LIB libspdk_rpc.a 00:02:02.519 SO libspdk_rpc.so.6.0 00:02:02.519 SYMLINK libspdk_rpc.so 00:02:02.779 CC lib/trace/trace.o 00:02:02.779 CC lib/trace/trace_flags.o 00:02:02.779 CC lib/trace/trace_rpc.o 00:02:02.779 CC lib/notify/notify.o 00:02:02.779 CC lib/notify/notify_rpc.o 00:02:02.779 CC lib/keyring/keyring.o 00:02:02.779 CC lib/keyring/keyring_rpc.o 00:02:03.039 LIB libspdk_trace.a 00:02:03.039 LIB libspdk_notify.a 00:02:03.039 SO libspdk_trace.so.10.0 00:02:03.039 SO libspdk_notify.so.6.0 00:02:03.039 LIB libspdk_keyring.a 00:02:03.039 SYMLINK libspdk_trace.so 00:02:03.039 SO libspdk_keyring.so.1.0 00:02:03.039 SYMLINK libspdk_notify.so 00:02:03.299 SYMLINK libspdk_keyring.so 00:02:03.299 CC lib/thread/thread.o 00:02:03.299 CC lib/thread/iobuf.o 00:02:03.558 CC lib/sock/sock.o 00:02:03.558 CC lib/sock/sock_rpc.o 00:02:03.819 LIB libspdk_sock.a 00:02:03.819 SO libspdk_sock.so.10.0 00:02:03.819 SYMLINK libspdk_sock.so 00:02:04.079 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:04.079 CC lib/nvme/nvme_ctrlr.o 00:02:04.079 CC lib/nvme/nvme_fabric.o 00:02:04.079 CC lib/nvme/nvme_ns_cmd.o 00:02:04.079 CC lib/nvme/nvme_ns.o 00:02:04.079 CC lib/nvme/nvme_pcie_common.o 00:02:04.079 CC lib/nvme/nvme.o 00:02:04.079 CC lib/nvme/nvme_pcie.o 00:02:04.079 CC lib/nvme/nvme_qpair.o 00:02:04.079 CC lib/nvme/nvme_quirks.o 00:02:04.079 CC lib/nvme/nvme_transport.o 00:02:04.079 CC lib/nvme/nvme_discovery.o 00:02:04.079 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:04.079 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:04.079 CC lib/nvme/nvme_tcp.o 00:02:04.079 CC lib/nvme/nvme_opal.o 00:02:04.079 CC lib/nvme/nvme_io_msg.o 00:02:04.079 CC lib/nvme/nvme_poll_group.o 00:02:04.079 CC lib/nvme/nvme_zns.o 00:02:04.079 CC lib/nvme/nvme_stubs.o 00:02:04.079 CC lib/nvme/nvme_auth.o 00:02:04.079 CC lib/nvme/nvme_cuse.o 00:02:04.079 CC lib/nvme/nvme_vfio_user.o 00:02:04.079 CC lib/nvme/nvme_rdma.o 00:02:04.339 LIB libspdk_thread.a 00:02:04.598 SO libspdk_thread.so.10.1 00:02:04.599 SYMLINK libspdk_thread.so 00:02:04.857 CC lib/accel/accel.o 00:02:04.857 CC lib/accel/accel_rpc.o 00:02:04.857 CC lib/vfu_tgt/tgt_endpoint.o 00:02:04.857 CC lib/accel/accel_sw.o 00:02:04.857 CC lib/vfu_tgt/tgt_rpc.o 00:02:04.857 CC lib/virtio/virtio.o 00:02:04.857 CC lib/virtio/virtio_vhost_user.o 00:02:04.857 CC lib/virtio/virtio_vfio_user.o 00:02:04.857 CC lib/virtio/virtio_pci.o 00:02:04.857 CC lib/blob/blobstore.o 00:02:04.857 CC lib/blob/request.o 00:02:04.857 CC lib/blob/zeroes.o 00:02:04.857 CC lib/blob/blob_bs_dev.o 00:02:04.857 CC lib/init/json_config.o 00:02:04.857 CC lib/init/rpc.o 00:02:04.857 CC lib/init/subsystem.o 00:02:04.857 CC lib/init/subsystem_rpc.o 00:02:05.117 LIB libspdk_init.a 00:02:05.117 SO libspdk_init.so.5.0 00:02:05.117 LIB libspdk_virtio.a 00:02:05.117 LIB libspdk_vfu_tgt.a 00:02:05.117 SYMLINK libspdk_init.so 00:02:05.117 SO libspdk_vfu_tgt.so.3.0 00:02:05.117 SO libspdk_virtio.so.7.0 00:02:05.117 SYMLINK libspdk_vfu_tgt.so 00:02:05.117 SYMLINK libspdk_virtio.so 00:02:05.384 CC lib/event/app.o 00:02:05.385 CC lib/event/reactor.o 00:02:05.385 CC lib/event/app_rpc.o 00:02:05.385 CC 
lib/event/log_rpc.o 00:02:05.385 CC lib/event/scheduler_static.o 00:02:05.650 LIB libspdk_accel.a 00:02:05.650 SO libspdk_accel.so.16.0 00:02:05.650 SYMLINK libspdk_accel.so 00:02:05.650 LIB libspdk_nvme.a 00:02:05.650 LIB libspdk_event.a 00:02:05.910 SO libspdk_event.so.14.0 00:02:05.910 SO libspdk_nvme.so.13.1 00:02:05.910 SYMLINK libspdk_event.so 00:02:05.910 CC lib/bdev/bdev.o 00:02:05.910 CC lib/bdev/bdev_rpc.o 00:02:05.910 CC lib/bdev/bdev_zone.o 00:02:05.910 CC lib/bdev/part.o 00:02:05.910 CC lib/bdev/scsi_nvme.o 00:02:06.169 SYMLINK libspdk_nvme.so 00:02:07.108 LIB libspdk_blob.a 00:02:07.108 SO libspdk_blob.so.11.0 00:02:07.108 SYMLINK libspdk_blob.so 00:02:07.367 CC lib/blobfs/blobfs.o 00:02:07.367 CC lib/blobfs/tree.o 00:02:07.367 CC lib/lvol/lvol.o 00:02:07.627 LIB libspdk_bdev.a 00:02:07.887 SO libspdk_bdev.so.16.0 00:02:07.887 SYMLINK libspdk_bdev.so 00:02:07.887 LIB libspdk_blobfs.a 00:02:07.887 LIB libspdk_lvol.a 00:02:07.887 SO libspdk_blobfs.so.10.0 00:02:07.887 SO libspdk_lvol.so.10.0 00:02:07.887 SYMLINK libspdk_blobfs.so 00:02:08.145 SYMLINK libspdk_lvol.so 00:02:08.145 CC lib/ftl/ftl_core.o 00:02:08.145 CC lib/ftl/ftl_init.o 00:02:08.145 CC lib/ftl/ftl_layout.o 00:02:08.145 CC lib/ftl/ftl_debug.o 00:02:08.145 CC lib/ftl/ftl_io.o 00:02:08.145 CC lib/ftl/ftl_sb.o 00:02:08.145 CC lib/ftl/ftl_l2p.o 00:02:08.145 CC lib/ftl/ftl_l2p_flat.o 00:02:08.145 CC lib/ublk/ublk.o 00:02:08.145 CC lib/ftl/ftl_nv_cache.o 00:02:08.145 CC lib/ublk/ublk_rpc.o 00:02:08.145 CC lib/ftl/ftl_band.o 00:02:08.145 CC lib/ftl/ftl_writer.o 00:02:08.145 CC lib/ftl/ftl_band_ops.o 00:02:08.145 CC lib/ftl/ftl_rq.o 00:02:08.145 CC lib/ftl/ftl_reloc.o 00:02:08.145 CC lib/ftl/ftl_l2p_cache.o 00:02:08.145 CC lib/ftl/ftl_p2l.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:08.145 CC lib/nvmf/ctrlr.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:08.145 CC lib/nvmf/ctrlr_discovery.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:08.145 CC lib/nvmf/ctrlr_bdev.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:08.145 CC lib/nvmf/subsystem.o 00:02:08.145 CC lib/nvmf/nvmf_rpc.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:08.145 CC lib/nvmf/nvmf.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:08.145 CC lib/nvmf/transport.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:08.145 CC lib/nvmf/mdns_server.o 00:02:08.145 CC lib/nvmf/tcp.o 00:02:08.145 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:08.145 CC lib/nvmf/stubs.o 00:02:08.145 CC lib/ftl/utils/ftl_conf.o 00:02:08.145 CC lib/nvmf/vfio_user.o 00:02:08.145 CC lib/ftl/utils/ftl_md.o 00:02:08.145 CC lib/ftl/utils/ftl_mempool.o 00:02:08.145 CC lib/nvmf/rdma.o 00:02:08.146 CC lib/nvmf/auth.o 00:02:08.146 CC lib/ftl/utils/ftl_bitmap.o 00:02:08.146 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:08.146 CC lib/ftl/utils/ftl_property.o 00:02:08.146 CC lib/nbd/nbd.o 00:02:08.146 CC lib/nbd/nbd_rpc.o 00:02:08.146 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:08.146 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:08.146 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:08.146 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:08.146 CC lib/scsi/port.o 00:02:08.146 CC lib/scsi/dev.o 00:02:08.146 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:08.146 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:08.146 CC lib/scsi/lun.o 00:02:08.146 CC 
lib/ftl/upgrade/ftl_sb_v3.o 00:02:08.146 CC lib/scsi/scsi_bdev.o 00:02:08.146 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:08.146 CC lib/scsi/scsi_pr.o 00:02:08.146 CC lib/scsi/scsi.o 00:02:08.146 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:08.146 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:08.146 CC lib/scsi/task.o 00:02:08.146 CC lib/scsi/scsi_rpc.o 00:02:08.146 CC lib/ftl/base/ftl_base_bdev.o 00:02:08.146 CC lib/ftl/base/ftl_base_dev.o 00:02:08.146 CC lib/ftl/ftl_trace.o 00:02:08.712 LIB libspdk_nbd.a 00:02:08.712 SO libspdk_nbd.so.7.0 00:02:08.712 LIB libspdk_scsi.a 00:02:08.712 LIB libspdk_ublk.a 00:02:08.712 SO libspdk_scsi.so.9.0 00:02:08.972 SYMLINK libspdk_nbd.so 00:02:08.972 SO libspdk_ublk.so.3.0 00:02:08.972 SYMLINK libspdk_ublk.so 00:02:08.972 SYMLINK libspdk_scsi.so 00:02:08.972 LIB libspdk_ftl.a 00:02:09.232 SO libspdk_ftl.so.9.0 00:02:09.232 CC lib/vhost/vhost.o 00:02:09.232 CC lib/vhost/vhost_rpc.o 00:02:09.232 CC lib/vhost/vhost_scsi.o 00:02:09.232 CC lib/vhost/vhost_blk.o 00:02:09.232 CC lib/vhost/rte_vhost_user.o 00:02:09.232 CC lib/iscsi/conn.o 00:02:09.232 CC lib/iscsi/init_grp.o 00:02:09.232 CC lib/iscsi/md5.o 00:02:09.232 CC lib/iscsi/iscsi.o 00:02:09.232 CC lib/iscsi/param.o 00:02:09.232 CC lib/iscsi/portal_grp.o 00:02:09.232 CC lib/iscsi/tgt_node.o 00:02:09.232 CC lib/iscsi/iscsi_subsystem.o 00:02:09.232 CC lib/iscsi/iscsi_rpc.o 00:02:09.232 CC lib/iscsi/task.o 00:02:09.491 SYMLINK libspdk_ftl.so 00:02:09.751 LIB libspdk_nvmf.a 00:02:10.011 SO libspdk_nvmf.so.19.0 00:02:10.011 LIB libspdk_vhost.a 00:02:10.011 SO libspdk_vhost.so.8.0 00:02:10.011 SYMLINK libspdk_nvmf.so 00:02:10.011 SYMLINK libspdk_vhost.so 00:02:10.271 LIB libspdk_iscsi.a 00:02:10.271 SO libspdk_iscsi.so.8.0 00:02:10.271 SYMLINK libspdk_iscsi.so 00:02:10.841 CC module/vfu_device/vfu_virtio.o 00:02:10.841 CC module/vfu_device/vfu_virtio_blk.o 00:02:10.841 CC module/vfu_device/vfu_virtio_scsi.o 00:02:10.841 CC module/vfu_device/vfu_virtio_rpc.o 00:02:10.841 CC module/env_dpdk/env_dpdk_rpc.o 00:02:11.100 CC module/accel/iaa/accel_iaa.o 00:02:11.100 CC module/accel/iaa/accel_iaa_rpc.o 00:02:11.100 CC module/accel/dsa/accel_dsa.o 00:02:11.100 CC module/sock/posix/posix.o 00:02:11.100 CC module/accel/dsa/accel_dsa_rpc.o 00:02:11.100 CC module/accel/error/accel_error.o 00:02:11.100 CC module/accel/error/accel_error_rpc.o 00:02:11.100 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:11.100 CC module/keyring/linux/keyring_rpc.o 00:02:11.100 CC module/keyring/linux/keyring.o 00:02:11.100 CC module/blob/bdev/blob_bdev.o 00:02:11.100 LIB libspdk_env_dpdk_rpc.a 00:02:11.100 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:11.100 CC module/accel/ioat/accel_ioat.o 00:02:11.100 CC module/keyring/file/keyring_rpc.o 00:02:11.100 CC module/keyring/file/keyring.o 00:02:11.100 CC module/accel/ioat/accel_ioat_rpc.o 00:02:11.100 CC module/scheduler/gscheduler/gscheduler.o 00:02:11.100 SO libspdk_env_dpdk_rpc.so.6.0 00:02:11.100 SYMLINK libspdk_env_dpdk_rpc.so 00:02:11.100 LIB libspdk_keyring_linux.a 00:02:11.100 LIB libspdk_accel_error.a 00:02:11.100 LIB libspdk_scheduler_dpdk_governor.a 00:02:11.100 LIB libspdk_scheduler_gscheduler.a 00:02:11.100 LIB libspdk_keyring_file.a 00:02:11.100 LIB libspdk_accel_iaa.a 00:02:11.100 LIB libspdk_accel_ioat.a 00:02:11.100 SO libspdk_keyring_linux.so.1.0 00:02:11.100 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:11.100 LIB libspdk_scheduler_dynamic.a 00:02:11.100 SO libspdk_accel_error.so.2.0 00:02:11.100 SO libspdk_scheduler_gscheduler.so.4.0 00:02:11.100 SO libspdk_accel_iaa.so.3.0 
00:02:11.100 SO libspdk_keyring_file.so.1.0 00:02:11.101 SO libspdk_accel_ioat.so.6.0 00:02:11.101 SO libspdk_scheduler_dynamic.so.4.0 00:02:11.101 LIB libspdk_accel_dsa.a 00:02:11.360 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:11.360 SYMLINK libspdk_accel_error.so 00:02:11.360 LIB libspdk_blob_bdev.a 00:02:11.360 SYMLINK libspdk_keyring_linux.so 00:02:11.360 SYMLINK libspdk_keyring_file.so 00:02:11.360 SO libspdk_accel_dsa.so.5.0 00:02:11.360 SYMLINK libspdk_scheduler_gscheduler.so 00:02:11.360 SYMLINK libspdk_accel_iaa.so 00:02:11.360 SYMLINK libspdk_scheduler_dynamic.so 00:02:11.360 SYMLINK libspdk_accel_ioat.so 00:02:11.360 SO libspdk_blob_bdev.so.11.0 00:02:11.360 SYMLINK libspdk_accel_dsa.so 00:02:11.360 SYMLINK libspdk_blob_bdev.so 00:02:11.360 LIB libspdk_vfu_device.a 00:02:11.360 SO libspdk_vfu_device.so.3.0 00:02:11.360 SYMLINK libspdk_vfu_device.so 00:02:11.619 LIB libspdk_sock_posix.a 00:02:11.619 SO libspdk_sock_posix.so.6.0 00:02:11.619 SYMLINK libspdk_sock_posix.so 00:02:11.877 CC module/bdev/delay/vbdev_delay.o 00:02:11.877 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:11.877 CC module/bdev/error/vbdev_error.o 00:02:11.877 CC module/bdev/error/vbdev_error_rpc.o 00:02:11.877 CC module/bdev/ftl/bdev_ftl.o 00:02:11.877 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:11.877 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:11.877 CC module/bdev/nvme/bdev_nvme.o 00:02:11.877 CC module/bdev/nvme/nvme_rpc.o 00:02:11.877 CC module/bdev/iscsi/bdev_iscsi.o 00:02:11.877 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:11.877 CC module/bdev/nvme/bdev_mdns_client.o 00:02:11.877 CC module/bdev/nvme/vbdev_opal.o 00:02:11.877 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:11.877 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:11.877 CC module/bdev/null/bdev_null_rpc.o 00:02:11.877 CC module/bdev/null/bdev_null.o 00:02:11.877 CC module/bdev/aio/bdev_aio.o 00:02:11.877 CC module/bdev/aio/bdev_aio_rpc.o 00:02:11.877 CC module/bdev/gpt/gpt.o 00:02:11.877 CC module/blobfs/bdev/blobfs_bdev.o 00:02:11.877 CC module/bdev/gpt/vbdev_gpt.o 00:02:11.877 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:11.877 CC module/bdev/lvol/vbdev_lvol.o 00:02:11.877 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:11.877 CC module/bdev/raid/bdev_raid.o 00:02:11.877 CC module/bdev/raid/bdev_raid_rpc.o 00:02:11.877 CC module/bdev/raid/bdev_raid_sb.o 00:02:11.877 CC module/bdev/raid/raid1.o 00:02:11.877 CC module/bdev/raid/raid0.o 00:02:11.877 CC module/bdev/raid/concat.o 00:02:11.877 CC module/bdev/split/vbdev_split.o 00:02:11.877 CC module/bdev/split/vbdev_split_rpc.o 00:02:11.877 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:11.877 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:11.877 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:11.877 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:11.877 CC module/bdev/passthru/vbdev_passthru.o 00:02:11.877 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:11.877 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:11.877 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:11.877 CC module/bdev/malloc/bdev_malloc.o 00:02:11.877 LIB libspdk_blobfs_bdev.a 00:02:12.135 SO libspdk_blobfs_bdev.so.6.0 00:02:12.135 LIB libspdk_bdev_null.a 00:02:12.135 LIB libspdk_bdev_split.a 00:02:12.135 LIB libspdk_bdev_gpt.a 00:02:12.135 LIB libspdk_bdev_error.a 00:02:12.135 SO libspdk_bdev_null.so.6.0 00:02:12.135 SYMLINK libspdk_blobfs_bdev.so 00:02:12.135 LIB libspdk_bdev_ftl.a 00:02:12.135 SO libspdk_bdev_split.so.6.0 00:02:12.135 SO libspdk_bdev_gpt.so.6.0 00:02:12.135 SO libspdk_bdev_error.so.6.0 00:02:12.135 LIB 
libspdk_bdev_aio.a 00:02:12.135 LIB libspdk_bdev_passthru.a 00:02:12.135 SO libspdk_bdev_ftl.so.6.0 00:02:12.135 LIB libspdk_bdev_delay.a 00:02:12.135 SYMLINK libspdk_bdev_null.so 00:02:12.135 LIB libspdk_bdev_zone_block.a 00:02:12.135 SYMLINK libspdk_bdev_split.so 00:02:12.135 SO libspdk_bdev_passthru.so.6.0 00:02:12.135 SO libspdk_bdev_aio.so.6.0 00:02:12.135 SO libspdk_bdev_zone_block.so.6.0 00:02:12.135 LIB libspdk_bdev_iscsi.a 00:02:12.135 SO libspdk_bdev_delay.so.6.0 00:02:12.135 LIB libspdk_bdev_malloc.a 00:02:12.135 SYMLINK libspdk_bdev_gpt.so 00:02:12.135 SYMLINK libspdk_bdev_error.so 00:02:12.135 SYMLINK libspdk_bdev_ftl.so 00:02:12.135 SO libspdk_bdev_iscsi.so.6.0 00:02:12.135 SO libspdk_bdev_malloc.so.6.0 00:02:12.135 SYMLINK libspdk_bdev_passthru.so 00:02:12.135 SYMLINK libspdk_bdev_aio.so 00:02:12.135 SYMLINK libspdk_bdev_zone_block.so 00:02:12.135 SYMLINK libspdk_bdev_delay.so 00:02:12.135 SYMLINK libspdk_bdev_iscsi.so 00:02:12.393 LIB libspdk_bdev_lvol.a 00:02:12.393 LIB libspdk_bdev_virtio.a 00:02:12.393 SYMLINK libspdk_bdev_malloc.so 00:02:12.393 SO libspdk_bdev_lvol.so.6.0 00:02:12.393 SO libspdk_bdev_virtio.so.6.0 00:02:12.393 SYMLINK libspdk_bdev_lvol.so 00:02:12.393 SYMLINK libspdk_bdev_virtio.so 00:02:12.652 LIB libspdk_bdev_raid.a 00:02:12.652 SO libspdk_bdev_raid.so.6.0 00:02:12.652 SYMLINK libspdk_bdev_raid.so 00:02:13.590 LIB libspdk_bdev_nvme.a 00:02:13.590 SO libspdk_bdev_nvme.so.7.0 00:02:13.590 SYMLINK libspdk_bdev_nvme.so 00:02:14.158 CC module/event/subsystems/vmd/vmd.o 00:02:14.158 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:14.158 CC module/event/subsystems/keyring/keyring.o 00:02:14.158 CC module/event/subsystems/iobuf/iobuf.o 00:02:14.158 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:14.158 CC module/event/subsystems/scheduler/scheduler.o 00:02:14.158 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:14.158 CC module/event/subsystems/sock/sock.o 00:02:14.158 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:14.158 LIB libspdk_event_scheduler.a 00:02:14.158 LIB libspdk_event_vhost_blk.a 00:02:14.158 LIB libspdk_event_keyring.a 00:02:14.158 LIB libspdk_event_vmd.a 00:02:14.158 LIB libspdk_event_sock.a 00:02:14.158 LIB libspdk_event_iobuf.a 00:02:14.417 SO libspdk_event_scheduler.so.4.0 00:02:14.417 SO libspdk_event_vhost_blk.so.3.0 00:02:14.417 LIB libspdk_event_vfu_tgt.a 00:02:14.418 SO libspdk_event_vmd.so.6.0 00:02:14.418 SO libspdk_event_sock.so.5.0 00:02:14.418 SO libspdk_event_keyring.so.1.0 00:02:14.418 SO libspdk_event_iobuf.so.3.0 00:02:14.418 SO libspdk_event_vfu_tgt.so.3.0 00:02:14.418 SYMLINK libspdk_event_scheduler.so 00:02:14.418 SYMLINK libspdk_event_vmd.so 00:02:14.418 SYMLINK libspdk_event_sock.so 00:02:14.418 SYMLINK libspdk_event_vhost_blk.so 00:02:14.418 SYMLINK libspdk_event_keyring.so 00:02:14.418 SYMLINK libspdk_event_iobuf.so 00:02:14.418 SYMLINK libspdk_event_vfu_tgt.so 00:02:14.677 CC module/event/subsystems/accel/accel.o 00:02:14.936 LIB libspdk_event_accel.a 00:02:14.936 SO libspdk_event_accel.so.6.0 00:02:14.936 SYMLINK libspdk_event_accel.so 00:02:15.195 CC module/event/subsystems/bdev/bdev.o 00:02:15.476 LIB libspdk_event_bdev.a 00:02:15.476 SO libspdk_event_bdev.so.6.0 00:02:15.476 SYMLINK libspdk_event_bdev.so 00:02:15.745 CC module/event/subsystems/nbd/nbd.o 00:02:15.745 CC module/event/subsystems/scsi/scsi.o 00:02:15.745 CC module/event/subsystems/ublk/ublk.o 00:02:15.745 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:15.745 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:15.745 LIB libspdk_event_nbd.a 
00:02:15.745 LIB libspdk_event_ublk.a 00:02:16.004 LIB libspdk_event_scsi.a 00:02:16.004 SO libspdk_event_nbd.so.6.0 00:02:16.004 SO libspdk_event_ublk.so.3.0 00:02:16.004 SO libspdk_event_scsi.so.6.0 00:02:16.004 SYMLINK libspdk_event_nbd.so 00:02:16.004 LIB libspdk_event_nvmf.a 00:02:16.004 SYMLINK libspdk_event_ublk.so 00:02:16.004 SYMLINK libspdk_event_scsi.so 00:02:16.004 SO libspdk_event_nvmf.so.6.0 00:02:16.004 SYMLINK libspdk_event_nvmf.so 00:02:16.264 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:16.264 CC module/event/subsystems/iscsi/iscsi.o 00:02:16.264 LIB libspdk_event_vhost_scsi.a 00:02:16.523 SO libspdk_event_vhost_scsi.so.3.0 00:02:16.523 LIB libspdk_event_iscsi.a 00:02:16.523 SYMLINK libspdk_event_vhost_scsi.so 00:02:16.523 SO libspdk_event_iscsi.so.6.0 00:02:16.523 SYMLINK libspdk_event_iscsi.so 00:02:16.782 SO libspdk.so.6.0 00:02:16.782 SYMLINK libspdk.so 00:02:17.045 TEST_HEADER include/spdk/accel.h 00:02:17.045 TEST_HEADER include/spdk/accel_module.h 00:02:17.045 TEST_HEADER include/spdk/assert.h 00:02:17.045 TEST_HEADER include/spdk/barrier.h 00:02:17.045 TEST_HEADER include/spdk/base64.h 00:02:17.045 TEST_HEADER include/spdk/bdev.h 00:02:17.045 TEST_HEADER include/spdk/bdev_module.h 00:02:17.045 TEST_HEADER include/spdk/bit_pool.h 00:02:17.045 TEST_HEADER include/spdk/bdev_zone.h 00:02:17.045 TEST_HEADER include/spdk/blob_bdev.h 00:02:17.045 TEST_HEADER include/spdk/bit_array.h 00:02:17.045 CC test/rpc_client/rpc_client_test.o 00:02:17.045 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:17.045 TEST_HEADER include/spdk/blobfs.h 00:02:17.045 TEST_HEADER include/spdk/blob.h 00:02:17.045 TEST_HEADER include/spdk/config.h 00:02:17.045 TEST_HEADER include/spdk/cpuset.h 00:02:17.045 TEST_HEADER include/spdk/conf.h 00:02:17.045 TEST_HEADER include/spdk/crc16.h 00:02:17.045 TEST_HEADER include/spdk/crc32.h 00:02:17.045 TEST_HEADER include/spdk/dif.h 00:02:17.045 TEST_HEADER include/spdk/crc64.h 00:02:17.045 TEST_HEADER include/spdk/dma.h 00:02:17.045 TEST_HEADER include/spdk/endian.h 00:02:17.045 TEST_HEADER include/spdk/env_dpdk.h 00:02:17.045 TEST_HEADER include/spdk/env.h 00:02:17.045 TEST_HEADER include/spdk/fd.h 00:02:17.045 TEST_HEADER include/spdk/event.h 00:02:17.045 TEST_HEADER include/spdk/file.h 00:02:17.045 TEST_HEADER include/spdk/fd_group.h 00:02:17.045 TEST_HEADER include/spdk/ftl.h 00:02:17.045 TEST_HEADER include/spdk/gpt_spec.h 00:02:17.045 TEST_HEADER include/spdk/hexlify.h 00:02:17.045 TEST_HEADER include/spdk/histogram_data.h 00:02:17.045 TEST_HEADER include/spdk/idxd.h 00:02:17.045 TEST_HEADER include/spdk/init.h 00:02:17.045 TEST_HEADER include/spdk/idxd_spec.h 00:02:17.045 TEST_HEADER include/spdk/ioat.h 00:02:17.045 TEST_HEADER include/spdk/ioat_spec.h 00:02:17.045 TEST_HEADER include/spdk/iscsi_spec.h 00:02:17.045 CC app/trace_record/trace_record.o 00:02:17.045 TEST_HEADER include/spdk/keyring.h 00:02:17.045 TEST_HEADER include/spdk/json.h 00:02:17.045 TEST_HEADER include/spdk/jsonrpc.h 00:02:17.045 TEST_HEADER include/spdk/keyring_module.h 00:02:17.045 TEST_HEADER include/spdk/likely.h 00:02:17.045 TEST_HEADER include/spdk/log.h 00:02:17.045 TEST_HEADER include/spdk/lvol.h 00:02:17.045 TEST_HEADER include/spdk/mmio.h 00:02:17.045 TEST_HEADER include/spdk/nbd.h 00:02:17.045 TEST_HEADER include/spdk/net.h 00:02:17.045 TEST_HEADER include/spdk/memory.h 00:02:17.045 CC app/spdk_nvme_discover/discovery_aer.o 00:02:17.045 TEST_HEADER include/spdk/nvme_intel.h 00:02:17.045 TEST_HEADER include/spdk/notify.h 00:02:17.045 CXX app/trace/trace.o 
00:02:17.045 TEST_HEADER include/spdk/nvme.h 00:02:17.045 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:17.045 TEST_HEADER include/spdk/nvme_spec.h 00:02:17.045 CC app/spdk_lspci/spdk_lspci.o 00:02:17.045 CC app/spdk_nvme_perf/perf.o 00:02:17.045 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:17.045 TEST_HEADER include/spdk/nvme_zns.h 00:02:17.045 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:17.045 TEST_HEADER include/spdk/nvmf.h 00:02:17.045 TEST_HEADER include/spdk/nvmf_spec.h 00:02:17.045 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:17.045 TEST_HEADER include/spdk/nvmf_transport.h 00:02:17.045 TEST_HEADER include/spdk/opal.h 00:02:17.045 TEST_HEADER include/spdk/opal_spec.h 00:02:17.045 TEST_HEADER include/spdk/pipe.h 00:02:17.045 TEST_HEADER include/spdk/queue.h 00:02:17.045 TEST_HEADER include/spdk/pci_ids.h 00:02:17.045 TEST_HEADER include/spdk/rpc.h 00:02:17.045 TEST_HEADER include/spdk/scheduler.h 00:02:17.045 TEST_HEADER include/spdk/reduce.h 00:02:17.045 TEST_HEADER include/spdk/scsi_spec.h 00:02:17.045 CC app/spdk_top/spdk_top.o 00:02:17.045 TEST_HEADER include/spdk/scsi.h 00:02:17.045 TEST_HEADER include/spdk/sock.h 00:02:17.045 TEST_HEADER include/spdk/string.h 00:02:17.045 TEST_HEADER include/spdk/stdinc.h 00:02:17.045 TEST_HEADER include/spdk/trace.h 00:02:17.045 TEST_HEADER include/spdk/thread.h 00:02:17.045 TEST_HEADER include/spdk/tree.h 00:02:17.045 TEST_HEADER include/spdk/trace_parser.h 00:02:17.045 CC app/spdk_nvme_identify/identify.o 00:02:17.045 TEST_HEADER include/spdk/util.h 00:02:17.045 TEST_HEADER include/spdk/ublk.h 00:02:17.045 TEST_HEADER include/spdk/uuid.h 00:02:17.045 TEST_HEADER include/spdk/version.h 00:02:17.045 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:17.045 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:17.045 TEST_HEADER include/spdk/vmd.h 00:02:17.045 TEST_HEADER include/spdk/vhost.h 00:02:17.045 TEST_HEADER include/spdk/xor.h 00:02:17.045 TEST_HEADER include/spdk/zipf.h 00:02:17.045 CXX test/cpp_headers/accel.o 00:02:17.045 CXX test/cpp_headers/assert.o 00:02:17.045 CXX test/cpp_headers/accel_module.o 00:02:17.045 CXX test/cpp_headers/base64.o 00:02:17.045 CXX test/cpp_headers/barrier.o 00:02:17.045 CC app/iscsi_tgt/iscsi_tgt.o 00:02:17.045 CXX test/cpp_headers/bdev.o 00:02:17.045 CXX test/cpp_headers/bdev_module.o 00:02:17.045 CXX test/cpp_headers/bdev_zone.o 00:02:17.045 CXX test/cpp_headers/bit_array.o 00:02:17.045 CXX test/cpp_headers/blob_bdev.o 00:02:17.045 CXX test/cpp_headers/bit_pool.o 00:02:17.045 CXX test/cpp_headers/blobfs_bdev.o 00:02:17.045 CXX test/cpp_headers/blobfs.o 00:02:17.045 CXX test/cpp_headers/config.o 00:02:17.045 CXX test/cpp_headers/conf.o 00:02:17.045 CXX test/cpp_headers/cpuset.o 00:02:17.045 CXX test/cpp_headers/blob.o 00:02:17.045 CXX test/cpp_headers/crc64.o 00:02:17.045 CXX test/cpp_headers/crc32.o 00:02:17.045 CXX test/cpp_headers/dif.o 00:02:17.045 CXX test/cpp_headers/crc16.o 00:02:17.045 CC app/nvmf_tgt/nvmf_main.o 00:02:17.045 CXX test/cpp_headers/dma.o 00:02:17.045 CXX test/cpp_headers/env_dpdk.o 00:02:17.045 CXX test/cpp_headers/env.o 00:02:17.045 CXX test/cpp_headers/endian.o 00:02:17.045 CXX test/cpp_headers/fd.o 00:02:17.045 CXX test/cpp_headers/event.o 00:02:17.045 CXX test/cpp_headers/file.o 00:02:17.045 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:17.045 CXX test/cpp_headers/gpt_spec.o 00:02:17.045 CXX test/cpp_headers/hexlify.o 00:02:17.045 CXX test/cpp_headers/fd_group.o 00:02:17.045 CXX test/cpp_headers/idxd.o 00:02:17.045 CXX test/cpp_headers/histogram_data.o 00:02:17.045 CXX 
test/cpp_headers/init.o 00:02:17.045 CXX test/cpp_headers/ftl.o 00:02:17.045 CXX test/cpp_headers/idxd_spec.o 00:02:17.045 CXX test/cpp_headers/ioat.o 00:02:17.045 CXX test/cpp_headers/json.o 00:02:17.045 CC app/spdk_dd/spdk_dd.o 00:02:17.045 CXX test/cpp_headers/ioat_spec.o 00:02:17.045 CXX test/cpp_headers/jsonrpc.o 00:02:17.045 CXX test/cpp_headers/keyring.o 00:02:17.045 CXX test/cpp_headers/iscsi_spec.o 00:02:17.045 CXX test/cpp_headers/likely.o 00:02:17.045 CXX test/cpp_headers/log.o 00:02:17.045 CXX test/cpp_headers/lvol.o 00:02:17.045 CXX test/cpp_headers/memory.o 00:02:17.045 CXX test/cpp_headers/keyring_module.o 00:02:17.045 CXX test/cpp_headers/nbd.o 00:02:17.045 CXX test/cpp_headers/mmio.o 00:02:17.045 CXX test/cpp_headers/nvme.o 00:02:17.045 CXX test/cpp_headers/notify.o 00:02:17.045 CXX test/cpp_headers/net.o 00:02:17.045 CXX test/cpp_headers/nvme_intel.o 00:02:17.045 CXX test/cpp_headers/nvme_ocssd.o 00:02:17.045 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:17.045 CXX test/cpp_headers/nvme_spec.o 00:02:17.045 CXX test/cpp_headers/nvme_zns.o 00:02:17.045 CXX test/cpp_headers/nvmf_cmd.o 00:02:17.045 CXX test/cpp_headers/nvmf.o 00:02:17.045 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:17.045 CXX test/cpp_headers/nvmf_spec.o 00:02:17.045 CXX test/cpp_headers/nvmf_transport.o 00:02:17.045 CXX test/cpp_headers/opal.o 00:02:17.045 CXX test/cpp_headers/opal_spec.o 00:02:17.045 CXX test/cpp_headers/pci_ids.o 00:02:17.045 CXX test/cpp_headers/pipe.o 00:02:17.045 CXX test/cpp_headers/queue.o 00:02:17.045 CC app/spdk_tgt/spdk_tgt.o 00:02:17.045 CC test/app/jsoncat/jsoncat.o 00:02:17.045 CC test/env/pci/pci_ut.o 00:02:17.045 CC test/thread/poller_perf/poller_perf.o 00:02:17.045 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:17.317 CC test/env/vtophys/vtophys.o 00:02:17.317 CC test/app/stub/stub.o 00:02:17.317 CC test/app/histogram_perf/histogram_perf.o 00:02:17.317 CC test/env/memory/memory_ut.o 00:02:17.317 CC test/app/bdev_svc/bdev_svc.o 00:02:17.317 CXX test/cpp_headers/reduce.o 00:02:17.317 CC examples/ioat/verify/verify.o 00:02:17.317 CC test/dma/test_dma/test_dma.o 00:02:17.317 CC app/fio/nvme/fio_plugin.o 00:02:17.317 CC examples/ioat/perf/perf.o 00:02:17.317 CC examples/util/zipf/zipf.o 00:02:17.317 CC app/fio/bdev/fio_plugin.o 00:02:17.581 LINK spdk_lspci 00:02:17.581 LINK spdk_nvme_discover 00:02:17.581 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:17.581 CC test/env/mem_callbacks/mem_callbacks.o 00:02:17.581 LINK spdk_trace_record 00:02:17.581 LINK rpc_client_test 00:02:17.581 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:17.581 LINK interrupt_tgt 00:02:17.581 LINK jsoncat 00:02:17.581 LINK nvmf_tgt 00:02:17.581 CXX test/cpp_headers/scheduler.o 00:02:17.581 CXX test/cpp_headers/scsi.o 00:02:17.581 CXX test/cpp_headers/scsi_spec.o 00:02:17.581 CXX test/cpp_headers/rpc.o 00:02:17.581 CXX test/cpp_headers/sock.o 00:02:17.581 CXX test/cpp_headers/stdinc.o 00:02:17.581 CXX test/cpp_headers/string.o 00:02:17.581 CXX test/cpp_headers/thread.o 00:02:17.581 CXX test/cpp_headers/trace.o 00:02:17.581 CXX test/cpp_headers/trace_parser.o 00:02:17.581 CXX test/cpp_headers/tree.o 00:02:17.581 CXX test/cpp_headers/ublk.o 00:02:17.581 CXX test/cpp_headers/util.o 00:02:17.581 CXX test/cpp_headers/uuid.o 00:02:17.581 CXX test/cpp_headers/version.o 00:02:17.581 CXX test/cpp_headers/vfio_user_pci.o 00:02:17.581 CXX test/cpp_headers/vfio_user_spec.o 00:02:17.581 LINK stub 00:02:17.581 CXX test/cpp_headers/vmd.o 00:02:17.581 CXX test/cpp_headers/vhost.o 00:02:17.581 CXX test/cpp_headers/xor.o 
00:02:17.581 CXX test/cpp_headers/zipf.o 00:02:17.845 LINK zipf 00:02:17.845 LINK spdk_tgt 00:02:17.845 LINK iscsi_tgt 00:02:17.845 LINK poller_perf 00:02:17.845 LINK histogram_perf 00:02:17.845 LINK verify 00:02:17.845 LINK env_dpdk_post_init 00:02:17.845 LINK vtophys 00:02:17.845 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:17.845 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:17.845 LINK bdev_svc 00:02:17.845 LINK spdk_dd 00:02:17.845 LINK ioat_perf 00:02:17.845 LINK pci_ut 00:02:18.104 LINK spdk_trace 00:02:18.104 LINK test_dma 00:02:18.104 LINK spdk_bdev 00:02:18.104 LINK spdk_nvme 00:02:18.104 LINK spdk_nvme_identify 00:02:18.104 LINK nvme_fuzz 00:02:18.104 CC test/event/reactor_perf/reactor_perf.o 00:02:18.104 CC test/event/event_perf/event_perf.o 00:02:18.104 LINK vhost_fuzz 00:02:18.104 CC examples/vmd/lsvmd/lsvmd.o 00:02:18.104 CC test/event/reactor/reactor.o 00:02:18.362 LINK spdk_nvme_perf 00:02:18.362 CC examples/vmd/led/led.o 00:02:18.362 CC examples/idxd/perf/perf.o 00:02:18.362 LINK mem_callbacks 00:02:18.362 CC test/event/app_repeat/app_repeat.o 00:02:18.362 CC test/event/scheduler/scheduler.o 00:02:18.362 CC examples/sock/hello_world/hello_sock.o 00:02:18.362 CC examples/thread/thread/thread_ex.o 00:02:18.362 LINK reactor_perf 00:02:18.362 LINK spdk_top 00:02:18.362 LINK event_perf 00:02:18.362 CC app/vhost/vhost.o 00:02:18.362 LINK led 00:02:18.362 LINK lsvmd 00:02:18.362 LINK reactor 00:02:18.362 LINK app_repeat 00:02:18.362 LINK scheduler 00:02:18.362 LINK hello_sock 00:02:18.362 CC test/nvme/err_injection/err_injection.o 00:02:18.620 CC test/nvme/boot_partition/boot_partition.o 00:02:18.620 CC test/nvme/aer/aer.o 00:02:18.620 CC test/nvme/e2edp/nvme_dp.o 00:02:18.620 CC test/nvme/sgl/sgl.o 00:02:18.620 CC test/nvme/cuse/cuse.o 00:02:18.620 CC test/nvme/overhead/overhead.o 00:02:18.620 CC test/nvme/fused_ordering/fused_ordering.o 00:02:18.620 CC test/nvme/simple_copy/simple_copy.o 00:02:18.620 CC test/nvme/reserve/reserve.o 00:02:18.620 CC test/nvme/reset/reset.o 00:02:18.620 CC test/nvme/compliance/nvme_compliance.o 00:02:18.620 CC test/nvme/startup/startup.o 00:02:18.620 CC test/nvme/connect_stress/connect_stress.o 00:02:18.620 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:18.620 CC test/nvme/fdp/fdp.o 00:02:18.620 LINK thread 00:02:18.620 LINK idxd_perf 00:02:18.620 CC test/blobfs/mkfs/mkfs.o 00:02:18.620 LINK vhost 00:02:18.620 CC test/accel/dif/dif.o 00:02:18.620 LINK memory_ut 00:02:18.620 CC test/lvol/esnap/esnap.o 00:02:18.620 LINK boot_partition 00:02:18.620 LINK err_injection 00:02:18.620 LINK connect_stress 00:02:18.620 LINK reserve 00:02:18.620 LINK doorbell_aers 00:02:18.620 LINK fused_ordering 00:02:18.620 LINK startup 00:02:18.620 LINK simple_copy 00:02:18.620 LINK sgl 00:02:18.620 LINK reset 00:02:18.878 LINK mkfs 00:02:18.878 LINK overhead 00:02:18.878 LINK nvme_dp 00:02:18.878 LINK aer 00:02:18.878 LINK nvme_compliance 00:02:18.878 LINK fdp 00:02:18.878 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:18.878 CC examples/nvme/reconnect/reconnect.o 00:02:18.878 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:18.878 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:18.878 CC examples/nvme/hello_world/hello_world.o 00:02:18.878 CC examples/nvme/hotplug/hotplug.o 00:02:18.878 CC examples/nvme/arbitration/arbitration.o 00:02:18.878 CC examples/nvme/abort/abort.o 00:02:18.878 LINK dif 00:02:19.137 CC examples/accel/perf/accel_perf.o 00:02:19.137 LINK iscsi_fuzz 00:02:19.137 LINK cmb_copy 00:02:19.137 LINK pmr_persistence 00:02:19.137 CC 
examples/blob/hello_world/hello_blob.o 00:02:19.137 CC examples/blob/cli/blobcli.o 00:02:19.137 LINK hello_world 00:02:19.137 LINK hotplug 00:02:19.137 LINK arbitration 00:02:19.137 LINK reconnect 00:02:19.137 LINK abort 00:02:19.395 LINK nvme_manage 00:02:19.395 LINK hello_blob 00:02:19.395 LINK accel_perf 00:02:19.395 CC test/bdev/bdevio/bdevio.o 00:02:19.395 LINK blobcli 00:02:19.395 LINK cuse 00:02:19.963 LINK bdevio 00:02:19.963 CC examples/bdev/bdevperf/bdevperf.o 00:02:19.963 CC examples/bdev/hello_world/hello_bdev.o 00:02:20.222 LINK hello_bdev 00:02:20.482 LINK bdevperf 00:02:21.057 CC examples/nvmf/nvmf/nvmf.o 00:02:21.057 LINK nvmf 00:02:22.001 LINK esnap 00:02:22.260 00:02:22.260 real 0m44.164s 00:02:22.260 user 6m29.970s 00:02:22.260 sys 3m27.320s 00:02:22.260 21:27:30 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:22.260 21:27:30 make -- common/autotest_common.sh@10 -- $ set +x 00:02:22.260 ************************************ 00:02:22.260 END TEST make 00:02:22.260 ************************************ 00:02:22.260 21:27:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:22.260 21:27:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:22.260 21:27:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:22.260 21:27:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.260 21:27:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:22.260 21:27:30 -- pm/common@44 -- $ pid=2766389 00:02:22.260 21:27:30 -- pm/common@50 -- $ kill -TERM 2766389 00:02:22.260 21:27:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.260 21:27:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:22.260 21:27:30 -- pm/common@44 -- $ pid=2766390 00:02:22.260 21:27:30 -- pm/common@50 -- $ kill -TERM 2766390 00:02:22.260 21:27:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.260 21:27:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:22.260 21:27:30 -- pm/common@44 -- $ pid=2766392 00:02:22.260 21:27:30 -- pm/common@50 -- $ kill -TERM 2766392 00:02:22.260 21:27:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.260 21:27:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:22.260 21:27:30 -- pm/common@44 -- $ pid=2766418 00:02:22.260 21:27:30 -- pm/common@50 -- $ sudo -E kill -TERM 2766418 00:02:22.519 21:27:30 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:22.519 21:27:30 -- nvmf/common.sh@7 -- # uname -s 00:02:22.519 21:27:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:22.519 21:27:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:22.519 21:27:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:22.519 21:27:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:22.519 21:27:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:22.519 21:27:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:22.519 21:27:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:22.519 21:27:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:22.519 21:27:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:22.519 21:27:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:22.519 21:27:30 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:22.519 21:27:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:22.519 21:27:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:22.519 21:27:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:22.519 21:27:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:22.519 21:27:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:22.519 21:27:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:22.519 21:27:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:22.519 21:27:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:22.519 21:27:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:22.519 21:27:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.519 21:27:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.519 21:27:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.519 21:27:30 -- paths/export.sh@5 -- # export PATH 00:02:22.519 21:27:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.519 21:27:30 -- nvmf/common.sh@47 -- # : 0 00:02:22.519 21:27:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:22.519 21:27:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:22.519 21:27:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:22.519 21:27:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:22.519 21:27:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:22.519 21:27:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:22.519 21:27:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:22.519 21:27:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:22.519 21:27:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:22.519 21:27:30 -- spdk/autotest.sh@32 -- # uname -s 00:02:22.519 21:27:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:22.519 21:27:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:22.519 21:27:30 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:22.519 21:27:30 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:22.519 21:27:30 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:22.519 21:27:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:22.519 21:27:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:22.519 21:27:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:22.519 21:27:30 -- spdk/autotest.sh@48 -- # udevadm_pid=2825901 00:02:22.519 21:27:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:22.519 21:27:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:22.519 21:27:30 -- pm/common@17 -- # local monitor 00:02:22.519 21:27:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.519 21:27:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.519 21:27:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.519 21:27:30 -- pm/common@21 -- # date +%s 00:02:22.519 21:27:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.519 21:27:30 -- pm/common@21 -- # date +%s 00:02:22.519 21:27:30 -- pm/common@25 -- # sleep 1 00:02:22.519 21:27:30 -- pm/common@21 -- # date +%s 00:02:22.519 21:27:30 -- pm/common@21 -- # date +%s 00:02:22.519 21:27:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721849250 00:02:22.519 21:27:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721849250 00:02:22.519 21:27:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721849250 00:02:22.519 21:27:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721849250 00:02:22.519 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721849250_collect-vmstat.pm.log 00:02:22.519 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721849250_collect-cpu-load.pm.log 00:02:22.519 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721849250_collect-cpu-temp.pm.log 00:02:22.519 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721849250_collect-bmc-pm.bmc.pm.log 00:02:23.455 21:27:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:23.455 21:27:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:23.455 21:27:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:23.455 21:27:31 -- common/autotest_common.sh@10 -- # set +x 00:02:23.455 21:27:31 -- spdk/autotest.sh@59 -- # create_test_list 00:02:23.455 21:27:31 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:23.455 21:27:31 -- common/autotest_common.sh@10 -- # set +x 00:02:23.455 21:27:31 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:23.455 21:27:31 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:23.455 21:27:31 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:02:23.455 21:27:31 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:23.455 21:27:31 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:23.455 21:27:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:23.455 21:27:31 -- common/autotest_common.sh@1453 -- # uname 00:02:23.714 21:27:31 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:02:23.714 21:27:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:23.714 21:27:31 -- common/autotest_common.sh@1473 -- # uname 00:02:23.714 21:27:31 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:02:23.714 21:27:31 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:23.714 21:27:31 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:23.714 21:27:31 -- spdk/autotest.sh@72 -- # hash lcov 00:02:23.714 21:27:31 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:23.714 21:27:31 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:23.714 --rc lcov_branch_coverage=1 00:02:23.714 --rc lcov_function_coverage=1 00:02:23.714 --rc genhtml_branch_coverage=1 00:02:23.714 --rc genhtml_function_coverage=1 00:02:23.714 --rc genhtml_legend=1 00:02:23.714 --rc geninfo_all_blocks=1 00:02:23.714 ' 00:02:23.714 21:27:31 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:23.714 --rc lcov_branch_coverage=1 00:02:23.714 --rc lcov_function_coverage=1 00:02:23.714 --rc genhtml_branch_coverage=1 00:02:23.714 --rc genhtml_function_coverage=1 00:02:23.714 --rc genhtml_legend=1 00:02:23.714 --rc geninfo_all_blocks=1 00:02:23.714 ' 00:02:23.714 21:27:31 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:23.714 --rc lcov_branch_coverage=1 00:02:23.714 --rc lcov_function_coverage=1 00:02:23.714 --rc genhtml_branch_coverage=1 00:02:23.714 --rc genhtml_function_coverage=1 00:02:23.714 --rc genhtml_legend=1 00:02:23.714 --rc geninfo_all_blocks=1 00:02:23.714 --no-external' 00:02:23.714 21:27:31 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:23.714 --rc lcov_branch_coverage=1 00:02:23.714 --rc lcov_function_coverage=1 00:02:23.714 --rc genhtml_branch_coverage=1 00:02:23.714 --rc genhtml_function_coverage=1 00:02:23.714 --rc genhtml_legend=1 00:02:23.714 --rc geninfo_all_blocks=1 00:02:23.714 --no-external' 00:02:23.714 21:27:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:23.714 lcov: LCOV version 1.14 00:02:23.714 21:27:31 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:25.091 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:25.091 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:25.091 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:25.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:25.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:25.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:25.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:25.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:25.092 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:25.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:25.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:25.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:25.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:25.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:25.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:25.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:25.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:25.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:25.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:25.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:25.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:25.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:25.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:25.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no 
functions found 00:02:25.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:25.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:25.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:25.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:25.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:25.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:25.351 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:25.352 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:25.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:25.352 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:25.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:25.610 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:25.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:25.610 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:25.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:25.610 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:25.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:25.610 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:37.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:37.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:50.054 21:27:56 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:50.054 21:27:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:50.054 21:27:56 -- common/autotest_common.sh@10 -- # set +x 00:02:50.054 21:27:56 -- spdk/autotest.sh@91 -- # rm -f 00:02:50.054 21:27:56 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:51.024 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:51.024 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:51.024 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:51.282 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:51.282 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:51.282 21:27:59 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:51.282 21:27:59 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:02:51.282 21:27:59 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:02:51.282 21:27:59 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:02:51.282 21:27:59 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:02:51.282 21:27:59 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:51.282 21:27:59 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:02:51.282 21:27:59 -- common/autotest_common.sh@1662 
-- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:51.282 21:27:59 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:02:51.282 21:27:59 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:51.282 21:27:59 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:51.282 21:27:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:51.282 21:27:59 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:51.282 21:27:59 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:51.282 21:27:59 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:51.282 No valid GPT data, bailing 00:02:51.282 21:27:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:51.282 21:27:59 -- scripts/common.sh@391 -- # pt= 00:02:51.282 21:27:59 -- scripts/common.sh@392 -- # return 1 00:02:51.282 21:27:59 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:51.282 1+0 records in 00:02:51.282 1+0 records out 00:02:51.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415857 s, 252 MB/s 00:02:51.282 21:27:59 -- spdk/autotest.sh@118 -- # sync 00:02:51.282 21:27:59 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:51.282 21:27:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:51.282 21:27:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:56.551 21:28:04 -- spdk/autotest.sh@124 -- # uname -s 00:02:56.551 21:28:04 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:56.551 21:28:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:56.551 21:28:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:56.551 21:28:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.551 21:28:04 -- common/autotest_common.sh@10 -- # set +x 00:02:56.551 ************************************ 00:02:56.551 START TEST setup.sh 00:02:56.551 ************************************ 00:02:56.551 21:28:04 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:56.551 * Looking for test storage... 00:02:56.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:56.551 21:28:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:56.551 21:28:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:56.551 21:28:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:56.551 21:28:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:56.551 21:28:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.551 21:28:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:56.551 ************************************ 00:02:56.551 START TEST acl 00:02:56.551 ************************************ 00:02:56.551 21:28:04 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:56.809 * Looking for test storage... 
00:02:56.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:56.809 21:28:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:56.809 21:28:04 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:02:56.809 21:28:04 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:02:56.809 21:28:04 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:02:56.809 21:28:04 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:02:56.809 21:28:04 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:56.809 21:28:04 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:02:56.809 21:28:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:56.809 21:28:04 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:02:56.809 21:28:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:56.809 21:28:04 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:56.809 21:28:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:56.809 21:28:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:56.809 21:28:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:56.809 21:28:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:56.809 21:28:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.095 21:28:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:00.095 21:28:07 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:00.095 21:28:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.095 21:28:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:00.095 21:28:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.095 21:28:07 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:02.626 Hugepages 00:03:02.626 node hugesize free / total 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 00:03:02.626 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:02.626 21:28:10 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:02.626 21:28:10 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:02.626 21:28:10 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.626 21:28:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:02.626 ************************************ 00:03:02.626 START TEST denied 00:03:02.626 ************************************ 00:03:02.626 21:28:10 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:02.626 21:28:10 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:02.626 21:28:10 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:02.626 21:28:10 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:02.626 21:28:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.626 21:28:10 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:05.157 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:05.157 21:28:13 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:05.157 21:28:13 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:05.157 21:28:13 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:05.157 21:28:13 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:05.157 21:28:13 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:05.157 21:28:13 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:05.157 21:28:13 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:05.157 21:28:13 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:05.157 21:28:13 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.157 21:28:13 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.362 00:03:09.362 real 0m6.542s 00:03:09.362 user 0m2.165s 00:03:09.362 sys 0m3.726s 00:03:09.362 21:28:16 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:09.362 21:28:16 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:09.362 ************************************ 00:03:09.362 END TEST denied 00:03:09.362 ************************************ 00:03:09.362 21:28:16 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:09.362 21:28:16 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.362 21:28:16 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.362 21:28:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:09.362 ************************************ 00:03:09.362 START TEST allowed 00:03:09.362 ************************************ 00:03:09.362 21:28:16 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:09.362 21:28:16 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:09.362 21:28:16 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:09.362 21:28:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.362 21:28:16 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:09.362 21:28:16 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:12.646 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:12.646 21:28:20 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:12.646 21:28:20 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:12.646 21:28:20 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:12.646 21:28:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:12.646 21:28:20 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.969 00:03:15.969 real 0m6.624s 00:03:15.969 user 0m2.142s 00:03:15.969 sys 0m3.653s 00:03:15.969 21:28:23 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:15.969 21:28:23 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:15.969 ************************************ 00:03:15.969 END TEST allowed 00:03:15.969 ************************************ 00:03:15.969 00:03:15.969 real 0m19.031s 00:03:15.969 user 0m6.464s 00:03:15.969 sys 0m11.270s 00:03:15.969 21:28:23 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:15.969 21:28:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:15.969 ************************************ 00:03:15.969 END TEST acl 00:03:15.969 ************************************ 00:03:15.969 21:28:23 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:15.969 21:28:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:15.969 21:28:23 setup.sh -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.969 21:28:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:15.969 ************************************ 00:03:15.969 START TEST hugepages 00:03:15.969 ************************************ 00:03:15.969 21:28:23 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:15.969 * Looking for test storage... 00:03:15.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 168362552 kB' 'MemAvailable: 171596088 kB' 'Buffers: 3896 kB' 'Cached: 14692600 kB' 'SwapCached: 0 kB' 'Active: 11545780 kB' 'Inactive: 3694312 kB' 'Active(anon): 11127824 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546920 kB' 'Mapped: 192172 kB' 'Shmem: 10584228 kB' 'KReclaimable: 530964 kB' 'Slab: 1186276 kB' 'SReclaimable: 530964 kB' 'SUnreclaim: 655312 kB' 'KernelStack: 20720 kB' 'PageTables: 9404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982020 kB' 'Committed_AS: 12670196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317416 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.969 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.970 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:15.971 21:28:23 
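The long run of `continue` entries above is setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time: each line is split on ': ', every key that is not the requested one is skipped, and the value is echoed once the key matches (here Hugepagesize, returning 2048 kB). A minimal standalone sketch of that lookup pattern, reconstructed from the trace rather than copied verbatim from the SPDK script, is:

```bash
#!/usr/bin/env bash
# Sketch of the lookup pattern traced above: read /proc/meminfo (or a
# per-node meminfo file) into an array, strip the "Node <n> " prefix that
# per-node files carry, then scan field by field until the requested key
# matches. Reconstructed from the xtrace; not the verbatim SPDK helper.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # With a node argument, prefer that node's meminfo if it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop "Node 0 " style prefixes
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # skip every non-matching field
        echo "$val"                       # value only, e.g. 2048 (kB implied)
        return 0
    done
    return 1
}

get_meminfo_sketch Hugepagesize   # typically prints 2048 on x86_64
```

The same pattern repeats below for AnonHugePages, HugePages_Surp and HugePages_Rsvd, which is why the identical skip sequence shows up several more times in this log.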
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:15.971 21:28:23 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:15.971 21:28:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:15.971 21:28:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.971 21:28:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:15.971 ************************************ 00:03:15.971 START TEST default_setup 00:03:15.971 ************************************ 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.971 21:28:23 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:18.504 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 
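At this point the test has cleared any leftover per-node hugepage counts and launched `run_test default_setup`. get_test_nr_hugepages asks for 2,097,152 kB on node 0; with the default 2048 kB page size that is the `nr_hugepages=1024` seen in the trace, and the `setup output` step then runs scripts/setup.sh, which is what produces the `ioatdma -> vfio-pci` (and, a little further down, `nvme -> vfio-pci`) rebind lines. A small sketch of the sizing arithmetic, with the kernel knobs the earlier trace exported shown as comments (paths and numbers are taken from the trace; the standalone script itself is illustrative):

```bash
#!/usr/bin/env bash
# Sizing arithmetic behind the get_test_nr_hugepages call traced above:
# a requested size in kB is converted into a page count using the default
# hugepage size reported by /proc/meminfo (2048 kB on this machine).
set -euo pipefail

size_kb=2097152                                   # requested amount (2 GiB)
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

(( size_kb >= default_hugepages )) || { echo "size too small" >&2; exit 1; }
nr_hugepages=$(( size_kb / default_hugepages ))   # 2097152 / 2048 = 1024
echo "would request $nr_hugepages pages of ${default_hugepages} kB"

# Applying it (requires root) would mean writing to the knobs exported
# earlier in the trace, either per size:
#   echo "$nr_hugepages" > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# or globally for the default size:
#   echo "$nr_hugepages" > /proc/sys/vm/nr_hugepages
```

The meminfo snapshots printed below show `HugePages_Total: 1024` and `Hugetlb: 2097152 kB`, i.e. 1024 pages of 2048 kB, which is consistent with this request.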
00:03:18.504 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:18.504 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:19.071 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170520532 kB' 'MemAvailable: 173754052 kB' 'Buffers: 3896 kB' 'Cached: 14692696 kB' 'SwapCached: 0 kB' 'Active: 11563408 kB' 'Inactive: 3694312 kB' 'Active(anon): 11145452 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564348 kB' 'Mapped: 192276 kB' 'Shmem: 10584324 kB' 'KReclaimable: 530932 kB' 'Slab: 1184572 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653640 kB' 'KernelStack: 20640 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12685280 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.071 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.334 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
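AnonHugePages has just been resolved to 0 and stored as `anon=0`; the next two scans below sample HugePages_Surp and then HugePages_Rsvd the same way before the per-node pools are checked. As a rough illustration of how such samples are commonly combined into a pool check (a generic sketch under that assumption, not the exact assertion verify_nr_hugepages performs):

```bash
#!/usr/bin/env bash
# Generic sanity check over the hugepage counters sampled in the trace.
# The bookkeeping mirrors what the test is collecting, but the exact
# expression below is an assumption, not the script's own check.
expected=1024   # pages requested earlier by default_setup

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk  '/^HugePages_Free:/  {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

echo "total=$total free=$free surp=$surp rsvd=$rsvd"

# Discount surplus (overcommit) pages and make sure the pool the kernel
# actually set aside covers what the test asked for.
if (( total - surp >= expected )); then
    echo "hugepage pool covers the requested $expected pages"
else
    echo "hugepage pool short: $((total - surp)) < $expected" >&2
    exit 1
fi
```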
00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170523804 kB' 'MemAvailable: 173757324 kB' 'Buffers: 3896 kB' 'Cached: 14692700 kB' 'SwapCached: 0 kB' 'Active: 11563624 kB' 'Inactive: 3694312 kB' 'Active(anon): 11145668 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564600 kB' 'Mapped: 192224 kB' 'Shmem: 10584328 kB' 'KReclaimable: 530932 kB' 'Slab: 1184552 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653620 kB' 'KernelStack: 20624 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12685300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.335 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.336 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.336 21:28:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... 00:03:19.336-337: the remaining /proc/meminfo keys, NFS_Unstable through HugePages_Rsvd, are compared against HugePages_Surp the same way; none match, the loop continues ...]
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
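The scan above is setup/common.sh's get_meminfo splitting each /proc/meminfo line on ':' and spaces until the requested key matches, then echoing just the number. A minimal standalone sketch of that lookup pattern, for readability only; it is not the SPDK helper, and get_meminfo_value is a made-up name:

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern traced above (IFS=': ' plus read -r var val _).
# Illustrative only, not setup/common.sh; the function name is invented for this example.
get_meminfo_value() {
    local key=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # var holds the meminfo key, val its number; a trailing "kB" unit, if any, lands in $_
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1   # key not present in this file
}

get_meminfo_value HugePages_Rsvd    # prints 0 on the machine logged here

00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- #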
IFS=': ' 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170523924 kB' 'MemAvailable: 173757444 kB' 'Buffers: 3896 kB' 'Cached: 14692716 kB' 'SwapCached: 0 kB' 'Active: 11563592 kB' 'Inactive: 3694312 kB' 'Active(anon): 11145636 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564544 kB' 'Mapped: 192224 kB' 'Shmem: 10584344 kB' 'KReclaimable: 530932 kB' 'Slab: 1184572 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653640 kB' 'KernelStack: 20608 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12685320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.337 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.337 
21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... 00:03:19.337-339: the remaining /proc/meminfo keys, Active through Unaccepted, are compared against HugePages_Rsvd the same way; none match, the loop continues ...]
00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
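Outside the harness, the counters this loop is hunting for can be pulled straight from procfs with a single grep; the expected output for this run follows the meminfo snapshot printed above and is only a convenience cross-check, not part of the test:

# Plain-procfs cross-check of the counters the trace is extracting (no SPDK helpers needed)
grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo
# On the box logged here this reports:
#   HugePages_Total:    1024
#   HugePages_Free:     1024
#   HugePages_Rsvd:        0
#   HugePages_Surp:        0
#   Hugepagesize:       2048 kB

00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31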
-- # read -r var val _ 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:19.339 nr_hugepages=1024 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:19.339 resv_hugepages=0 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:19.339 surplus_hugepages=0 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:19.339 anon_hugepages=0 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170523168 kB' 'MemAvailable: 173756688 kB' 'Buffers: 3896 kB' 'Cached: 14692740 kB' 'SwapCached: 0 kB' 'Active: 
11563596 kB' 'Inactive: 3694312 kB' 'Active(anon): 11145640 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564576 kB' 'Mapped: 192224 kB' 'Shmem: 10584368 kB' 'KReclaimable: 530932 kB' 'Slab: 1184572 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653640 kB' 'KernelStack: 20624 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12685344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.339 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.340 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.340 21:28:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... 00:03:19.340-341: each /proc/meminfo key from Active through Unaccepted is compared against HugePages_Total the same way; none match, the loop continues ...]
00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
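With surp and resv both 0 and nr_hugepages echoed as 1024, the sanity checks visible at hugepages.sh@107 and @110 amount to asserting that HugePages_Total equals the requested page count plus reserved and surplus pages. A compact sketch of that check under the same assumptions, using plain awk lookups instead of the script's get_meminfo:

# Sketch of the hugepage accounting the test asserts: Total == requested + Rsvd + Surp.
# nr_hugepages mirrors the value echoed in the log; the awk reads are plain procfs.
nr_hugepages=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)

if (( total == nr_hugepages + resv + surp )); then
    echo "hugepage accounting OK: total=$total resv=$resv surp=$surp"
else
    echo "unexpected hugepage counts: total=$total resv=$resv surp=$surp" >&2
    exit 1
fi

00:03:19.341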
21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.341 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91367332 kB' 'MemUsed: 6248296 kB' 'SwapCached: 0 kB' 'Active: 2587808 kB' 'Inactive: 216924 kB' 'Active(anon): 2425984 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2647244 kB' 'Mapped: 100196 kB' 'AnonPages: 160588 kB' 'Shmem: 2268496 kB' 'KernelStack: 10984 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354412 kB' 'Slab: 649428 kB' 'SReclaimable: 354412 kB' 'SUnreclaim: 295016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:19.341 21:28:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... 00:03:19.341-342: the node0 meminfo keys, MemFree through Unaccepted, are compared against HugePages_Surp the same way; none match, the loop continues ...]
00:03:19.342 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.342 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.342 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.342 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.342 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:19.342 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:19.342 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:19.343 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.343 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:19.343 21:28:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:19.343 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.343 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.343 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.343 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.343 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:19.343 node0=1024 expecting 1024 00:03:19.343 21:28:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:19.343 00:03:19.343 real 0m3.486s 00:03:19.343 user 0m1.003s 00:03:19.343 sys 0m1.625s 00:03:19.343 21:28:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.343 21:28:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:19.343 ************************************ 00:03:19.343 END TEST default_setup 00:03:19.343 ************************************ 00:03:19.343 21:28:27 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:19.343 21:28:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:19.343 21:28:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.343 21:28:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:19.343 ************************************ 00:03:19.343 START TEST per_node_1G_alloc 00:03:19.343 ************************************ 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.343 21:28:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.878 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.878 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:21.878 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:21.878 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.878 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.878 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.878 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.878 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:21.878 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.878 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.878 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.143 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.143 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.143 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.143 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.143 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.143 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.143 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170540540 kB' 'MemAvailable: 173774060 kB' 'Buffers: 3896 kB' 'Cached: 14692844 kB' 'SwapCached: 0 kB' 'Active: 11564392 kB' 'Inactive: 3694312 kB' 'Active(anon): 11146436 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565812 kB' 'Mapped: 192300 kB' 'Shmem: 10584472 kB' 'KReclaimable: 530932 kB' 'Slab: 1184204 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653272 kB' 'KernelStack: 20624 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12685948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 
164626432 kB' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.144 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170542000 kB' 'MemAvailable: 173775520 kB' 'Buffers: 3896 kB' 'Cached: 14692848 kB' 'SwapCached: 0 kB' 'Active: 11564304 kB' 'Inactive: 3694312 kB' 'Active(anon): 11146348 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565756 kB' 'Mapped: 192312 kB' 'Shmem: 10584476 kB' 'KReclaimable: 530932 kB' 'Slab: 1184228 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653296 kB' 'KernelStack: 20608 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12685968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.145 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.146 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:22.147 21:28:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170542096 kB' 'MemAvailable: 173775616 kB' 'Buffers: 3896 kB' 'Cached: 14692864 kB' 'SwapCached: 0 kB' 'Active: 11564336 kB' 'Inactive: 3694312 kB' 'Active(anon): 11146380 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565744 kB' 'Mapped: 192236 kB' 'Shmem: 10584492 kB' 'KReclaimable: 530932 kB' 'Slab: 1184220 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653288 kB' 'KernelStack: 20608 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12685992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.147 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.147 21:28:30
[xtrace elided: the remaining /proc/meminfo fields (Buffers through HugePages_Free) are read and skipped; none match HugePages_Rsvd, so each iteration hits continue]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.149 nr_hugepages=1024 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.149 resv_hugepages=0 00:03:22.149 21:28:30
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.149 surplus_hugepages=0 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.149 anon_hugepages=0 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170542096 kB' 'MemAvailable: 173775616 kB' 'Buffers: 3896 kB' 'Cached: 14692908 kB' 'SwapCached: 0 kB' 'Active: 11563972 kB' 'Inactive: 3694312 kB' 'Active(anon): 11146016 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565344 kB' 'Mapped: 192236 kB' 'Shmem: 10584536 kB' 'KReclaimable: 530932 kB' 'Slab: 1184220 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653288 kB' 'KernelStack: 20592 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12686012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.149 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.150 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.150 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.150 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.150 21:28:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.150 21:28:30
[xtrace elided: the remaining /proc/meminfo fields (MemFree through HugePages_Free) are read and skipped until the requested field, HugePages_Total, matches]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.151 21:28:30
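At this point the test has confirmed the global figures (HugePages_Total = 1024 = nr_hugepages + surp + resv) and get_nodes starts walking /sys/devices/system/node/node*; the trace resumes below with the second node and then re-reads node 0's own meminfo. The per-node accounting it is performing amounts to the sketch below; the variable names mirror the trace, but the loop body is a simplified grep-based reconstruction rather than the verbatim hugepages.sh, and the even 512/512 split is simply the 1024-page request divided across this box's two NUMA nodes.

shopt -s nullglob
declare -A nodes_sys                       # expected hugepages per NUMA node
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=512          # 1024 requested pages split across 2 nodes
done
no_nodes=${#nodes_sys[@]}                  # 2 on this system
if (( no_nodes == 0 )); then
    echo "no NUMA nodes found" >&2
    exit 1
fi

# re-read each node's own meminfo and compare against the expected split
for node in "${!nodes_sys[@]}"; do
    meminfo=/sys/devices/system/node/node$node/meminfo
    read -r _ _ _ total < <(grep HugePages_Total "$meminfo")   # line format: "Node N HugePages_Total: X"
    read -r _ _ _ surp  < <(grep HugePages_Surp  "$meminfo")
    echo "node$node: HugePages_Total=$total HugePages_Surp=$surp (expected ${nodes_sys[$node]})"
done

On this run the node0 snapshot printed just below reports HugePages_Total: 512, HugePages_Free: 512 and HugePages_Surp: 0, i.e. exactly the expected half of the 1024-page allocation.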
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92396940 kB' 'MemUsed: 5218688 kB' 'SwapCached: 0 kB' 'Active: 2589076 kB' 'Inactive: 216924 kB' 'Active(anon): 2427252 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2647296 kB' 'Mapped: 100208 kB' 'AnonPages: 162168 kB' 'Shmem: 2268548 kB' 'KernelStack: 10968 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354412 kB' 'Slab: 649228 kB' 'SReclaimable: 354412 kB' 'SUnreclaim: 294816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.151 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.152 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.152 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.152 21:28:30
[xtrace elided: node0's meminfo fields are read and skipped one by one while looking for HugePages_Surp]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 78145360 kB' 'MemUsed: 15620148 kB' 'SwapCached: 0 kB' 'Active: 8975644 kB' 'Inactive: 3477388 kB' 'Active(anon): 8719512 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12049532 kB' 'Mapped: 92028 kB' 'AnonPages: 403916 kB' 'Shmem: 8316012 kB' 'KernelStack: 9656 kB' 'PageTables: 5292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176520 kB' 'Slab: 534992 kB' 'SReclaimable: 176520 kB' 'SUnreclaim: 358472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
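What the trace above is doing: setup/common.sh's get_meminfo has switched mem_f to /sys/devices/system/node/node1/meminfo, stripped the "Node 1 " prefix from every line, and is now scanning key/value pairs until it reaches HugePages_Surp. A minimal stand-alone sketch of that parsing idea in plain bash (the helper name get_meminfo_sketch is invented here for illustration; the real helper in setup/common.sh differs in detail):

    shopt -s extglob                         # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local key=$1 node=${2:-} file=/proc/meminfo
        # Per-node stats live under sysfs; fall back to the global /proc/meminfo.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && file=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$file"
        mem=("${mem[@]#Node +([0-9]) }")     # per-node lines start with "Node <N> "
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Surp 1   -> 0 on this node, per the dump above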
00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.153 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.154 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.413 21:28:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.413 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.414 21:28:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:22.414 node0=512 expecting 512 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:22.414 node1=512 expecting 512 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:22.414 00:03:22.414 real 0m2.870s 00:03:22.414 user 0m1.152s 00:03:22.414 sys 0m1.790s 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:22.414 21:28:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:22.414 ************************************ 00:03:22.414 END TEST per_node_1G_alloc 00:03:22.414 ************************************ 00:03:22.414 21:28:30 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:22.414 21:28:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.414 21:28:30 
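per_node_1G_alloc finishes with both NUMA nodes reporting 512 free 2048 kB pages, and the suite moves on to even_2G_alloc, which requests 1024 pages total with HUGE_EVEN_ALLOC=yes so scripts/setup.sh spreads them evenly (512 per node). At the kernel-interface level an even split boils down to writing the per-node counts under sysfs; a rough sketch using the standard hugepage sysfs paths rather than anything SPDK-specific (setup.sh itself also handles mounts, ownership and retries):

    NRHUGE=1024
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( NRHUGE / ${#nodes[@]} ))        # 1024 / 2 nodes = 512 each
    for n in "${nodes[@]}"; do
        echo "$per_node" | sudo tee "$n/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
    done
    grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect 1024 / 1024 afterwards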
setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.414 21:28:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:22.414 ************************************ 00:03:22.414 START TEST even_2G_alloc 00:03:22.414 ************************************ 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.414 21:28:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.951 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:24.951 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:03:24.951 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.951 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170573668 kB' 'MemAvailable: 173807188 kB' 'Buffers: 3896 kB' 'Cached: 14692992 kB' 'SwapCached: 0 kB' 'Active: 11564472 kB' 'Inactive: 3694312 kB' 'Active(anon): 11146516 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565108 kB' 'Mapped: 191652 kB' 'Shmem: 10584620 kB' 'KReclaimable: 530932 kB' 'Slab: 1184036 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653104 kB' 'KernelStack: 20528 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12674216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.951 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.952 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
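At this point verify_nr_hugepages has settled anon=0: transparent hugepages are set to [madvise] on this host rather than [never], so the test reads AnonHugePages from /proc/meminfo (0 kB in the dump above) to confirm THP is not inflating the numbers. It next fetches the system-wide HugePages_Surp, and the comparison it is working toward is roughly the following bookkeeping (a sketch only, reusing the get_meminfo_sketch helper from earlier; the real hugepages.sh accounting also folds in reserved pages and the per-node counts):

    anon=0                                         # AnonHugePages, read above
    total=$(get_meminfo_sketch HugePages_Total)    # 1024 requested by this test
    surp=$(get_meminfo_sketch HugePages_Surp)      # surplus pages the kernel added on its own
    # Only the explicitly reserved pool should count against the expected NRHUGE.
    if (( total - surp != 1024 )); then
        echo "unexpected hugepage count: $(( total - surp ))" >&2
    fi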
00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170572580 kB' 'MemAvailable: 173806100 kB' 'Buffers: 3896 kB' 'Cached: 14692996 kB' 'SwapCached: 0 kB' 'Active: 11567068 kB' 'Inactive: 3694312 kB' 'Active(anon): 11149112 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567732 kB' 'Mapped: 191968 kB' 'Shmem: 10584624 kB' 'KReclaimable: 530932 kB' 'Slab: 1183940 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653008 kB' 'KernelStack: 20544 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12678972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317212 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.953 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.954 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.955 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.955 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:24.955 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170575796 kB' 'MemAvailable: 173809316 kB' 'Buffers: 3896 kB' 'Cached: 14692996 kB' 'SwapCached: 0 kB' 'Active: 11561764 kB' 'Inactive: 3694312 kB' 'Active(anon): 11143808 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562456 kB' 'Mapped: 191464 kB' 'Shmem: 10584624 kB' 'KReclaimable: 530932 kB' 'Slab: 1184000 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653068 kB' 'KernelStack: 20528 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12671752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.220 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 
21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.221 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:25.222 nr_hugepages=1024 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.222 resv_hugepages=0 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.222 surplus_hugepages=0 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.222 anon_hugepages=0 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170577084 kB' 'MemAvailable: 173810604 kB' 'Buffers: 3896 kB' 'Cached: 14692996 kB' 'SwapCached: 0 kB' 'Active: 11562196 kB' 'Inactive: 3694312 kB' 'Active(anon): 11144240 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562900 kB' 'Mapped: 191464 kB' 'Shmem: 10584624 kB' 'KReclaimable: 530932 kB' 'Slab: 1184000 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653068 kB' 'KernelStack: 20624 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12671404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317256 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.222 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 
21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:25.223 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [xtrace repeats the same compare/continue for each remaining /proc/meminfo key: KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted]
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92425152 kB' 'MemUsed: 5190476 kB' 'SwapCached: 0 kB' 'Active: 2589096 kB' 'Inactive: 216924 kB' 'Active(anon): 2427272 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2647296 kB' 'Mapped: 99868 kB' 'AnonPages: 161892 kB' 'Shmem: 2268548 kB' 'KernelStack: 11304 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354412 kB' 'Slab: 649484 kB' 'SReclaimable: 354412 kB' 'SUnreclaim: 295072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
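For readability, this is what the compare/continue churn in the trace amounts to: setup/common.sh walks a meminfo file line by line until the requested key matches, then echoes its value. A minimal stand-alone sketch of that lookup pattern follows; get_field is a hypothetical name for illustration, not the literal common.sh function.

# Sketch of the meminfo lookup pattern the trace is exercising (assumed helper name).
get_field() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#"Node $node "}              # per-node files prefix each line with "Node <n> "
        IFS=': ' read -r var val _ <<< "$line"  # e.g. "HugePages_Surp: 0" -> var=HugePages_Surp val=0
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_field HugePages_Total        # 1024 on this box, matching the echo above
get_field HugePages_Surp 0       # surplus 2 MiB pages on NUMA node 0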
00:03:25.224 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace compares every key of node0's meminfo against HugePages_Surp and skips it with continue until HugePages_Surp matches]
00:03:25.226 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.226 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.226 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.226 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:25.226 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.226 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.226 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:25.226 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:25.226 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:25.226 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.226 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 78150428 kB' 'MemUsed: 15615080 kB' 'SwapCached: 0 kB' 'Active: 8973216 kB' 'Inactive: 3477388 kB' 'Active(anon): 8717084 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12049672 kB' 'Mapped: 91264 kB' 'AnonPages: 401028 kB' 'Shmem: 8316152 kB' 'KernelStack: 9496 kB' 'PageTables: 4668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176520 kB' 'Slab: 534516 kB' 'SReclaimable: 176520 kB' 'SUnreclaim: 357996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
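Both node dumps above report HugePages_Total: 512 and HugePages_Free: 512. The same per-node figures can be read directly from the kernel's hugetlb counters; a small sketch under the assumption of 2048 kB hugepages and the standard sysfs layout (this is not part of the SPDK scripts).

# Cross-check of the per-node figures via the kernel's hugetlb sysfs counters
# (assumes 2 MiB hugepages; purely illustrative).
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    hp=$node_dir/hugepages/hugepages-2048kB
    printf 'node%s: total=%s free=%s\n' \
        "$node" "$(cat "$hp/nr_hugepages")" "$(cat "$hp/free_hugepages")"
done
# Expected on this box: node0: total=512 free=512 and node1: total=512 free=512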
00:03:25.226 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace compares every key of node1's meminfo against HugePages_Surp and skips it with continue until HugePages_Surp matches]
00:03:25.227 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.227 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.227 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:25.227 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:25.227 node0=512 expecting 512
00:03:25.227 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:25.227 node1=512 expecting 512
00:03:25.227 21:28:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:25.227 real 0m2.853s
00:03:25.227 user 0m1.105s
00:03:25.228 sys 0m1.791s
00:03:25.228 21:28:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:25.228 21:28:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:25.228 ************************************
00:03:25.228 END TEST even_2G_alloc
00:03:25.228 ************************************
00:03:25.228 21:28:33 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:25.228 21:28:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:25.228 21:28:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:25.228 21:28:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:25.228 ************************************
00:03:25.228 START TEST odd_alloc
00:03:25.228 ************************************
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:25.228 21:28:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:27.766 0000:00:04.0 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:27.766 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:27.766 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:27.766 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:27.766 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:27.766 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.766 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.767 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170561404 kB' 'MemAvailable: 173794924 kB' 'Buffers: 3896 kB' 'Cached: 14693144 kB' 'SwapCached: 0 kB' 'Active: 11562492 kB' 'Inactive: 3694312 kB' 'Active(anon): 11144536 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563084 kB' 'Mapped: 191200 kB' 'Shmem: 10584772 kB' 'KReclaimable: 530932 kB' 'Slab: 1184636 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653704 kB' 'KernelStack: 20608 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12670760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB'
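The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test above is matching against the transparent-hugepage policy string, and get_meminfo AnonHugePages then confirms that no anonymous THP pages are in use. The same two checks, sketched directly against the standard kernel files (illustrative, not the hugepages.sh source):

# Read the THP policy and, if THP is not disabled, report anonymous huge page usage.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *'[never]'* ]]; then
    awk '/^AnonHugePages:/ {print "AnonHugePages:", $2, $3}' /proc/meminfo   # 0 kB here
fi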
00:03:27.767 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace compares each /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages and skips it with continue]
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768
21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170564884 kB' 'MemAvailable: 173798404 kB' 'Buffers: 3896 kB' 'Cached: 14693148 kB' 'SwapCached: 0 kB' 'Active: 11562004 kB' 'Inactive: 3694312 kB' 'Active(anon): 11144048 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562576 kB' 'Mapped: 191224 kB' 'Shmem: 10584776 kB' 'KReclaimable: 530932 kB' 'Slab: 1184696 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653764 kB' 'KernelStack: 20560 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12670780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317112 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.768 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.769 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170565412 kB' 'MemAvailable: 173798932 kB' 'Buffers: 3896 kB' 'Cached: 14693164 kB' 'SwapCached: 0 kB' 'Active: 11561808 kB' 'Inactive: 3694312 kB' 'Active(anon): 11143852 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562352 kB' 'Mapped: 191144 kB' 'Shmem: 10584792 kB' 'KReclaimable: 530932 kB' 'Slab: 1184640 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653708 kB' 'KernelStack: 20544 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12670800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317112 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 
21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.770 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.771 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:27.772 nr_hugepages=1025 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.772 resv_hugepages=0 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.772 surplus_hugepages=0 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.772 anon_hugepages=0 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170564404 kB' 'MemAvailable: 173797924 kB' 'Buffers: 3896 kB' 'Cached: 14693184 kB' 'SwapCached: 0 kB' 'Active: 11561960 kB' 'Inactive: 3694312 kB' 'Active(anon): 11144004 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562472 kB' 'Mapped: 191144 kB' 'Shmem: 10584812 kB' 'KReclaimable: 530932 kB' 'Slab: 1184640 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653708 kB' 'KernelStack: 20528 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12673216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.772 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:27.773 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92403488 kB' 'MemUsed: 5212140 kB' 'SwapCached: 0 kB' 'Active: 2589980 kB' 'Inactive: 216924 kB' 'Active(anon): 2428156 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2647328 kB' 'Mapped: 99880 kB' 'AnonPages: 162732 kB' 'Shmem: 2268580 kB' 'KernelStack: 10984 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354412 kB' 'Slab: 650084 kB' 'SReclaimable: 354412 kB' 'SUnreclaim: 295672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.774 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 78160584 kB' 'MemUsed: 15604924 kB' 'SwapCached: 0 kB' 'Active: 8972704 kB' 'Inactive: 3477388 kB' 'Active(anon): 8716572 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12049772 kB' 'Mapped: 91272 kB' 'AnonPages: 400388 kB' 'Shmem: 8316252 kB' 'KernelStack: 9720 kB' 'PageTables: 5264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176520 kB' 'Slab: 534556 kB' 'SReclaimable: 176520 kB' 'SUnreclaim: 358036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.775 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.776 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.035 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:28.036 node0=512 expecting 513 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:28.036 node1=513 expecting 512 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:28.036 00:03:28.036 real 0m2.645s 00:03:28.036 user 0m1.031s 00:03:28.036 sys 0m1.627s 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.036 21:28:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:28.036 ************************************ 00:03:28.036 END TEST odd_alloc 00:03:28.036 ************************************ 00:03:28.036 21:28:35 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:28.036 21:28:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.036 21:28:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.036 21:28:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:28.036 ************************************ 00:03:28.036 START TEST custom_alloc 00:03:28.036 ************************************ 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:28.036 21:28:35 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:28.036 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.037 21:28:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.570 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.570 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
00:03:30.570 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.570 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.570 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:30.570 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:30.570 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:30.570 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.570 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.570 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:30.570 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:30.570 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:30.570 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.570 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169500520 kB' 'MemAvailable: 172734040 kB' 'Buffers: 3896 kB' 'Cached: 14693292 kB' 'SwapCached: 0 kB' 'Active: 11564256 kB' 'Inactive: 3694312 kB' 'Active(anon): 11146300 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564088 kB' 'Mapped: 191288 kB' 'Shmem: 10584920 kB' 'KReclaimable: 530932 kB' 'Slab: 1184464 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653532 kB' 'KernelStack: 20992 kB' 'PageTables: 10032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12673908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317416 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.835 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
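The arithmetic at the top of this block is what produced the 512/1024 split being verified here: get_test_nr_hugepages converts the 2097152 kB request into 1024 default-size (2048 kB) pages, and get_test_nr_hugepages_per_node either honours an explicit user node list (empty in this run), copies the already-populated nodes_hp targets into nodes_test, or spreads the count evenly across the detected NUMA nodes. A minimal sketch of that distribution logic, reconstructed from the trace above rather than taken from the script itself:

    # Sketch only (simplified from the trace): size is in kB, pages are 2048 kB.
    default_hugepages=2048
    nodes_hp=([0]=512 [1]=1024)                   # per-node targets built earlier in the trace
    get_test_nr_hugepages_sketch() {
        local size=$1 nr_hugepages _no_nodes=2
        local -a nodes_test=()
        nr_hugepages=$(( size / default_hugepages ))            # 2097152 kB -> 1024 pages
        if (( ${#nodes_hp[@]} > 0 )); then                      # copy explicit per-node targets
            local n; for n in "${!nodes_hp[@]}"; do nodes_test[n]=${nodes_hp[n]}; done
        else                                                    # otherwise split evenly
            while (( _no_nodes > 0 )); do nodes_test[--_no_nodes]=$(( nr_hugepages / 2 )); done
        fi
        declare -p nodes_test nr_hugepages
    }
    get_test_nr_hugepages_sketch 2097152          # -> nodes_test=([0]="512" [1]="1024")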
00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169503348 kB' 'MemAvailable: 172736868 kB' 'Buffers: 3896 kB' 'Cached: 14693296 kB' 'SwapCached: 0 kB' 'Active: 11564084 kB' 'Inactive: 3694312 kB' 'Active(anon): 11146128 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564396 kB' 'Mapped: 191288 kB' 'Shmem: 10584924 kB' 'KReclaimable: 530932 kB' 'Slab: 1184348 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653416 kB' 'KernelStack: 21008 kB' 'PageTables: 9496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12673928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317352 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.836 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 
21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
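The long run of "[[ field == ... ]] / continue" lines through here is get_meminfo doing a linear scan: /proc/meminfo is read into an array with mapfile, any "Node N " prefix is stripped (for the per-node files under /sys/devices/system/node), and each "name: value" pair is split on IFS=': ' until the requested key (HugePages_Surp in this pass) is found and its value echoed. A compact sketch of the same scan, assuming only the system-wide /proc/meminfo and omitting the per-node handling:

    # Sketch only: the field scan the xtrace shows, without per-node support.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated 'continue' lines above
            echo "${val:-0}"                   # bare number; the kB unit lands in "_"
            return 0
        done < /proc/meminfo
        echo 0                                 # fallback when the key is absent (sketch only)
    }
    get_meminfo_sketch HugePages_Surp          # -> 0 on this host, per the dumps above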
00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.837 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
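What verify_nr_hugepages is collecting with these reads is simple accounting: AnonHugePages (anon), HugePages_Surp (surp) and HugePages_Rsvd (resv) are recorded, and the pool counters are compared against the 1536 pages requested; the dumps above already show HugePages_Total and HugePages_Free at 1536 with no surplus or reserved pages. A condensed restatement of the system-wide part of that comparison (the exact conditions in hugepages.sh may differ, and the per-node checks that follow are omitted):

    # Sketch of the comparison this verification is leading up to.
    expected=1536                                     # 512 (node 0) + 1024 (node 1)
    read -r total free < <(awk '
        $1 == "HugePages_Total:" { t = $2 }
        $1 == "HugePages_Free:"  { f = $2 }
        END { print t, f }' /proc/meminfo)
    if (( total == expected && free == expected )); then
        echo "hugepage pool matches the requested allocation ($expected pages)"
    else
        echo "hugepage pool mismatch: total=$total free=$free expected=$expected" >&2
    fi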
00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169504144 kB' 'MemAvailable: 172737664 kB' 'Buffers: 3896 kB' 'Cached: 14693308 kB' 'SwapCached: 0 kB' 'Active: 11562224 kB' 'Inactive: 3694312 kB' 'Active(anon): 11144268 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562428 kB' 'Mapped: 191156 kB' 'Shmem: 
10584936 kB' 'KReclaimable: 530932 kB' 'Slab: 1184356 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653424 kB' 'KernelStack: 20608 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12674836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317336 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.838 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 
21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.839 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
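The wall of 'continue' entries above is a single get_meminfo HugePages_Rsvd call working its way through /proc/meminfo: setup/common.sh splits each line on IFS=': ', compares the key against the backslash-escaped (i.e. literal) pattern, and only echoes the value once the key matches. A minimal re-creation of that behaviour, assuming only what the trace shows — the sed-based prefix strip is an illustrative stand-in for the script's extglob expansion:

get_meminfo() {
  local get=$1                      # key to look up, e.g. HugePages_Rsvd
  local node=$2                     # empty = global view, 0/1/... = NUMA node
  local var val
  local mem_f=/proc/meminfo
  # With an empty $node this test becomes .../node/node/meminfo, which fails,
  # so the global /proc/meminfo is kept -- exactly what the trace shows.
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue    # one 'continue' per non-matching key
    echo "$val"
    return 0
  done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # per-node lines carry a "Node N " prefix
  return 1
}

Used as resv=$(get_meminfo HugePages_Rsvd), which is where the resv=0 seen a little further down comes from.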
00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:30.840 nr_hugepages=1536 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.840 resv_hugepages=0 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.840 surplus_hugepages=0 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.840 anon_hugepages=0 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.840 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169506028 kB' 'MemAvailable: 172739548 kB' 'Buffers: 3896 kB' 'Cached: 14693336 kB' 'SwapCached: 0 kB' 'Active: 11563352 kB' 'Inactive: 3694312 kB' 'Active(anon): 11145396 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563600 kB' 'Mapped: 191164 kB' 'Shmem: 10584964 kB' 'KReclaimable: 530932 kB' 'Slab: 1184532 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 653600 kB' 'KernelStack: 20640 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12673968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317336 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.841 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92407012 kB' 'MemUsed: 5208616 kB' 'SwapCached: 0 kB' 'Active: 2589188 kB' 'Inactive: 216924 kB' 'Active(anon): 2427364 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2647456 kB' 'Mapped: 99892 kB' 'AnonPages: 161788 kB' 'Shmem: 2268708 kB' 'KernelStack: 10936 kB' 'PageTables: 3636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354412 kB' 'Slab: 649668 kB' 'SReclaimable: 354412 kB' 'SUnreclaim: 295256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.842 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
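This HugePages_Surp lookup runs against node 0's file (mem_f=/sys/devices/system/node/node0/meminfo above), whose dump reports 'HugePages_Total: 512', 'HugePages_Free: 512' and 'HugePages_Surp: 0'. As a side note, not something the traced scripts do: the same per-node counts are also exposed directly under the hugepages sysfs directory, and the 2048kB subdirectory used here matches the 'Hugepagesize: 2048 kB' reported in the global dump:

for n in /sys/devices/system/node/node[0-9]*; do
  printf '%s: %s x 2MiB hugepages\n' "${n##*/}" \
      "$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
done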
00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.843 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
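Node 0 contributes no surplus pages, so nodes_test[0] gains nothing beyond the reserved count (resv is 0), and the loop moves on to node 1. What the test ultimately verifies is that the global figures and the per-node split agree; restating the numbers from the dumps in this run:

nr_hugepages=1536; resv=0; surp=0          # global dump: HugePages_Total/Rsvd/Surp
node0_total=512;   node1_total=1024        # per-node dumps
(( nr_hugepages == node0_total + node1_total )) && echo '512 + 1024 == 1536'
(( 1536 == nr_hugepages + surp + resv ))        && echo 'hugepages.sh checks at @107/@110 hold'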
00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77096320 kB' 'MemUsed: 16669188 kB' 'SwapCached: 0 kB' 'Active: 8974172 kB' 'Inactive: 3477388 kB' 'Active(anon): 8718040 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12049792 kB' 'Mapped: 91264 kB' 'AnonPages: 401828 kB' 'Shmem: 8316272 kB' 'KernelStack: 9816 kB' 'PageTables: 5284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176520 kB' 'Slab: 534856 kB' 'SReclaimable: 176520 kB' 'SUnreclaim: 358336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.844 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
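This second per-node pass reads /sys/devices/system/node/node1/meminfo, whose dump shows 'HugePages_Total: 1024' and 'HugePages_Surp: 0'. Two bash details from the trace are easy to miss: the node list is discovered with the extglob pattern node+([0-9]), and every line of a per-node meminfo file carries a 'Node N ' prefix that is stripped before the ': ' split. A small sketch of that discovery step; the want array is illustrative and stands in for the 512/1024 page counts this particular run requests per node:

shopt -s extglob                    # required for the +([0-9]) pattern in the trace
declare -a nodes_sys
want=(512 1024)                     # node 0 -> 512 pages, node 1 -> 1024 pages in this run
for node in /sys/devices/system/node/node+([0-9]); do
  idx=${node##*node}                # path suffix gives the node index
  nodes_sys[idx]=${want[idx]:-0}
done
echo "no_nodes=${#nodes_sys[@]}"    # matches 'no_nodes=2' in the get_nodes trace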
00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:30.845 node0=512 expecting 512 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:30.845 node1=1024 expecting 1024 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:30.845 00:03:30.845 real 0m2.916s 00:03:30.845 user 0m1.223s 00:03:30.845 sys 0m1.764s 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.845 21:28:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.845 ************************************ 00:03:30.845 END TEST custom_alloc 00:03:30.845 ************************************ 00:03:30.845 21:28:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:30.845 21:28:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.845 21:28:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.845 21:28:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.845 ************************************ 00:03:30.845 START TEST no_shrink_alloc 00:03:30.845 ************************************ 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.845 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.846 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:30.846 21:28:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.846 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:30.846 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:30.846 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:30.846 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.846 21:28:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.416 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:33.416 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:33.416 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:33.416 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:33.416 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:33.416 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:33.417 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:33.417 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:33.417 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:33.417 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:33.417 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:33.417 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:33.417 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:33.680 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:33.680 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:33.680 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:33.680 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170551068 kB' 'MemAvailable: 173784588 kB' 'Buffers: 3896 kB' 'Cached: 14693448 kB' 'SwapCached: 0 kB' 'Active: 11564832 kB' 'Inactive: 3694312 kB' 'Active(anon): 11146876 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564468 kB' 'Mapped: 191824 kB' 'Shmem: 10585076 kB' 'KReclaimable: 530932 kB' 'Slab: 1183768 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 652836 kB' 'KernelStack: 20816 kB' 'PageTables: 9300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12672924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.680 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170547408 kB' 'MemAvailable: 173780928 kB' 'Buffers: 3896 kB' 'Cached: 14693448 kB' 'SwapCached: 0 kB' 'Active: 11567404 kB' 'Inactive: 3694312 kB' 'Active(anon): 11149448 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567104 kB' 'Mapped: 191772 kB' 'Shmem: 10585076 kB' 'KReclaimable: 530932 kB' 'Slab: 1183608 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 652676 kB' 'KernelStack: 20528 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12677032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.681 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 
21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.682 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170542704 kB' 'MemAvailable: 173776224 kB' 'Buffers: 3896 kB' 'Cached: 14693448 kB' 'SwapCached: 0 kB' 'Active: 11568540 kB' 'Inactive: 3694312 kB' 'Active(anon): 11150584 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568244 kB' 'Mapped: 192036 kB' 'Shmem: 10585076 kB' 'KReclaimable: 530932 kB' 'Slab: 1183624 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 652692 kB' 'KernelStack: 20544 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12677992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317116 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 
kB' 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.683 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.684 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 
21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.685 21:28:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:33.685 nr_hugepages=1024 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.685 resv_hugepages=0 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.685 surplus_hugepages=0 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.685 anon_hugepages=0 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.685 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170542452 kB' 'MemAvailable: 173775972 kB' 'Buffers: 3896 kB' 'Cached: 14693488 kB' 'SwapCached: 0 kB' 'Active: 11562516 kB' 'Inactive: 3694312 kB' 'Active(anon): 11144560 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562752 kB' 'Mapped: 191176 kB' 'Shmem: 10585116 kB' 'KReclaimable: 530932 kB' 'Slab: 1183624 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 652692 kB' 'KernelStack: 20544 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12671892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317112 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.686 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 
21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:33.687 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91366112 kB' 'MemUsed: 6249516 kB' 'SwapCached: 0 kB' 'Active: 2588876 kB' 'Inactive: 216924 kB' 'Active(anon): 2427052 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2647584 kB' 'Mapped: 99912 kB' 'AnonPages: 161348 kB' 'Shmem: 2268836 kB' 'KernelStack: 11000 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354412 kB' 'Slab: 649136 kB' 'SReclaimable: 354412 kB' 'SUnreclaim: 294724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.688 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.689 21:28:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:33.689 node0=1024 expecting 1024 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.689 21:28:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.228 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:36.228 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:36.228 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:36.228 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170544152 kB' 'MemAvailable: 173777672 kB' 'Buffers: 3896 kB' 'Cached: 14693568 kB' 'SwapCached: 0 kB' 'Active: 11564232 kB' 'Inactive: 3694312 kB' 'Active(anon): 11146276 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563872 kB' 'Mapped: 191312 kB' 'Shmem: 10585196 kB' 'KReclaimable: 530932 kB' 'Slab: 1183408 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 652476 kB' 'KernelStack: 20768 kB' 'PageTables: 9252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12675160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317304 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.228 21:28:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.228 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
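
Earlier in the trace (hugepages.sh@202 and common.sh@10) the test re-ran the setup script with CLEAR_HUGE=no and NRHUGE=512, and scripts/setup.sh reported that node0 already had 1024 hugepages allocated and kept them. A sketch of the equivalent standalone invocation, assuming setup.sh honors NRHUGE and CLEAR_HUGE from its environment as it does in this job (the workspace path matches the one in the log, and root privileges are needed to change hugepage allocation):

#!/usr/bin/env bash
# Re-run the SPDK setup script the way hugepages.sh@202 does above:
# request 512 hugepages but keep whatever is already allocated (CLEAR_HUGE=no).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo CLEAR_HUGE=no NRHUGE=512 "$SPDK_DIR/scripts/setup.sh"
# With 1024 pages already present, setup.sh prints:
#   INFO: Requested 512 hugepages but 1024 already allocated on node0
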
00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
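
This AnonHugePages lookup is guarded by the check traced at setup/hugepages.sh@96-97: the script reads /sys/kernel/mm/transparent_hugepage/enabled and only queries AnonHugePages when THP is not set to [never] (here it resolves to anon=0 just below, since the snapshot reports AnonHugePages: 0 kB). A self-contained sketch of that guard, using awk in place of the helper sketched earlier:

#!/usr/bin/env bash
# Sketch of the THP guard traced at setup/hugepages.sh@96-97.
# Paths are the standard sysfs/procfs locations; the logic mirrors the trace,
# not the upstream script verbatim.
thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"

if [[ $thp_state != *"[never]"* ]]; then
	# THP is not disabled, so anonymous huge pages may exist and must be accounted for.
	anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
	anon=0
fi

echo "AnonHugePages: ${anon:-0} kB"
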
00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.229 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170547876 kB' 'MemAvailable: 173781396 kB' 'Buffers: 3896 kB' 'Cached: 14693568 kB' 'SwapCached: 0 kB' 'Active: 11563824 kB' 'Inactive: 3694312 kB' 'Active(anon): 11145868 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563320 kB' 'Mapped: 191304 kB' 'Shmem: 10585196 kB' 'KReclaimable: 530932 kB' 'Slab: 1183316 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 652384 kB' 'KernelStack: 20560 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12675176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317288 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 
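
The snapshot above reports HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd/Surp: 0 and Hugepagesize: 2048 kB, consistent with Hugetlb: 2097152 kB (1024 x 2048 kB) and a pool that is allocated but not yet in use. A small sketch that recomputes those figures from /proc/meminfo:

#!/usr/bin/env bash
# Sanity-check the hugepage counters seen in the meminfo snapshot above.
# Assumes 2048 kB hugepages, as reported by Hugepagesize on this host.
read -r total free rsvd surp size <<<"$(awk '
	/^HugePages_Total:/ {t=$2}
	/^HugePages_Free:/  {f=$2}
	/^HugePages_Rsvd:/  {r=$2}
	/^HugePages_Surp:/  {s=$2}
	/^Hugepagesize:/    {h=$2}
	END {print t, f, r, s, h}' /proc/meminfo)"

echo "pool: $total pages x $size kB = $((total * size)) kB (Hugetlb)"
echo "in use: $((total - free)) pages, reserved: $rsvd, surplus: $surp"
# With the values traced above: 1024 x 2048 kB = 2097152 kB, 0 pages in use.
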
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 
21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.230 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.231 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170546524 kB' 'MemAvailable: 173780044 kB' 'Buffers: 3896 kB' 'Cached: 14693592 kB' 'SwapCached: 0 kB' 'Active: 11563540 kB' 'Inactive: 3694312 kB' 'Active(anon): 11145584 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563504 kB' 'Mapped: 191184 kB' 'Shmem: 10585220 kB' 'KReclaimable: 530932 kB' 'Slab: 1183556 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 652624 kB' 'KernelStack: 20720 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12675200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317288 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 
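
Once surp (HugePages_Surp, above) and resv (HugePages_Rsvd, being scanned here) are in hand, verify_nr_hugepages tallies the per-node counts and prints the "node0=1024 expecting 1024" line seen earlier in the log (setup/hugepages.sh@117-130). A sketch of that per-node comparison against sysfs; the expected count of 1024 and the 2048 kB page size are illustrative assumptions for this host:

#!/usr/bin/env bash
# Sketch of the per-node check traced at setup/hugepages.sh@117-130.
# Not the upstream implementation; it reads nr_hugepages per NUMA node and
# compares each against the expected pool size.
expected=1024
declare -A nodes_test

for node_dir in /sys/devices/system/node/node[0-9]*; do
	[[ -d $node_dir ]] || continue
	node=${node_dir##*node}
	nodes_test[$node]=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
done

for node in "${!nodes_test[@]}"; do
	echo "node$node=${nodes_test[$node]} expecting $expected"
	(( nodes_test[$node] == expected )) || echo "node$node differs from expected" >&2
done
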
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.232 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 
21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.233 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.234 nr_hugepages=1024 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.234 resv_hugepages=0 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.234 surplus_hugepages=0 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.234 anon_hugepages=0 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170545092 kB' 'MemAvailable: 173778612 kB' 'Buffers: 3896 kB' 'Cached: 14693612 kB' 'SwapCached: 0 kB' 'Active: 11563456 kB' 'Inactive: 3694312 kB' 'Active(anon): 11145500 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563416 kB' 'Mapped: 191184 kB' 'Shmem: 10585240 kB' 'KReclaimable: 530932 kB' 'Slab: 1183516 kB' 'SReclaimable: 530932 kB' 'SUnreclaim: 652584 kB' 'KernelStack: 20624 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12675220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317336 kB' 'VmallocChunk: 0 kB' 'Percpu: 118272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3918804 kB' 'DirectMap2M: 33509376 kB' 'DirectMap1G: 164626432 kB' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.234 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
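The xtrace above and below is setup/common.sh's get_meminfo helper scanning a meminfo file key by key: it sets IFS=': ', reads each line into var/val, skips (continue) every key that does not match the requested field, and echoes the value when it finds the match. The real helper also falls back from /proc/meminfo to a per-node /sys/devices/system/node/nodeN/meminfo file and strips the "Node N " prefix via mapfile, which this minimal standalone sketch leaves out (function name and defaults here are illustrative, not the test's exact interface):

    #!/usr/bin/env bash
    # Minimal sketch of the parsing loop the xtrace is exercising: scan a
    # meminfo file key by key and print the value of the requested field.
    get_meminfo_value() {
        local want=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # every other key is skipped
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }
    get_meminfo_value HugePages_Total   # prints 1024 on the machine traced here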
00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.235 21:28:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.235 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
97615628 kB' 'MemFree: 91354236 kB' 'MemUsed: 6261392 kB' 'SwapCached: 0 kB' 'Active: 2590360 kB' 'Inactive: 216924 kB' 'Active(anon): 2428536 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2647696 kB' 'Mapped: 99920 kB' 'AnonPages: 162720 kB' 'Shmem: 2268948 kB' 'KernelStack: 11000 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354412 kB' 'Slab: 649176 kB' 'SReclaimable: 354412 kB' 'SUnreclaim: 294764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.236 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 
21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
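What the surrounding trace amounts to: the HugePages_Rsvd and HugePages_Surp scans both return 0, get_meminfo HugePages_Total returns 1024, and hugepages.sh checks that the system-wide total matches what the test requested plus surplus and reserved pages; get_nodes then enumerates /sys/devices/system/node/node[0-9]* (no_nodes=2 on this box) and the scan running here reads HugePages_Surp from node0's own meminfo file to see how the 1024 pages are split per node. A sketch of that accounting under standard /proc and sysfs paths (the per-node nr_hugepages counter below is the usual sysfs knob; the test itself parses the per-node meminfo instead):

    #!/usr/bin/env bash
    # Sketch of the hugepage accounting check mirrored from the traced test.
    want=1024   # pages the test asked for via nr_hugepages
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    (( total == want + surp + resv )) || echo "unexpected hugepage accounting"

    # Per-node split of the pool, using the standard 2 MiB sysfs counter.
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        echo "node$n=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")"
    done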
00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.237 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.238 21:28:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:36.238 node0=1024 expecting 1024 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:36.238 00:03:36.238 real 0m5.386s 00:03:36.238 user 0m2.020s 00:03:36.238 sys 0m3.344s 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.238 21:28:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:36.238 ************************************ 00:03:36.238 END TEST no_shrink_alloc 00:03:36.238 ************************************ 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
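Once no_shrink_alloc passes (node0=1024 as expected), clear_hp resets the environment: the loop whose first iteration is the last trace entry above walks every hugepage pool under each NUMA node, writes 0 to it, and exports CLEAR_HUGE=yes so later setup.sh runs start clean. The redirection target is not visible in the xtrace (redirections are not traced); the sketch below assumes the standard nr_hugepages sysfs file:

    #!/usr/bin/env bash
    # Rough equivalent of the clear_hp loop being traced: zero every hugepage
    # pool on every NUMA node, then flag the environment as cleared.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # needs root; standard sysfs knob
        done
    done
    export CLEAR_HUGE=yes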
00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:36.497 21:28:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:36.497 00:03:36.497 real 0m20.654s 00:03:36.497 user 0m7.732s 00:03:36.497 sys 0m12.275s 00:03:36.497 21:28:44 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.497 21:28:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.497 ************************************ 00:03:36.497 END TEST hugepages 00:03:36.497 ************************************ 00:03:36.497 21:28:44 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:36.497 21:28:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.497 21:28:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.497 21:28:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:36.497 ************************************ 00:03:36.497 START TEST driver 00:03:36.497 ************************************ 00:03:36.497 21:28:44 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:36.497 * Looking for test storage... 
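Just above, before the driver suite gets going, clear_hp walks every hugepage size directory on every NUMA node, writes 0 back, and exports CLEAR_HUGE=yes. A rough sketch of that pass; the redirection target is an assumption (the trace only shows "echo 0"), and the real loop iterates "${!nodes_sys[@]}" rather than globbing sysfs directly:

clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"    # presumably drops the reservation for this page size
        done
    done
    export CLEAR_HUGE=yes                  # exported for the later scripts/setup.sh invocations
}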
00:03:36.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:36.497 21:28:44 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:36.497 21:28:44 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.497 21:28:44 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.682 21:28:48 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:40.682 21:28:48 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.682 21:28:48 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.682 21:28:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:40.682 ************************************ 00:03:40.682 START TEST guess_driver 00:03:40.682 ************************************ 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:40.682 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:40.682 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:40.682 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:40.682 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:40.682 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:40.682 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:40.682 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:40.682 21:28:48 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:40.682 Looking for driver=vfio-pci 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.682 21:28:48 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.215 21:28:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.781 21:28:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.781 21:28:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.781 21:28:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.040 21:28:51 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:44.040 21:28:51 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:44.040 21:28:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.040 21:28:51 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.230 00:03:48.230 real 0m7.440s 00:03:48.231 user 0m2.055s 00:03:48.231 sys 0m3.795s 00:03:48.231 21:28:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.231 21:28:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.231 ************************************ 00:03:48.231 END TEST guess_driver 00:03:48.231 ************************************ 00:03:48.231 00:03:48.231 real 0m11.433s 00:03:48.231 user 0m3.231s 00:03:48.231 sys 0m5.922s 00:03:48.231 21:28:55 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.231 
21:28:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.231 ************************************ 00:03:48.231 END TEST driver 00:03:48.231 ************************************ 00:03:48.231 21:28:55 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:48.231 21:28:55 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.231 21:28:55 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.231 21:28:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:48.231 ************************************ 00:03:48.231 START TEST devices 00:03:48.231 ************************************ 00:03:48.231 21:28:55 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:48.231 * Looking for test storage... 00:03:48.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:48.231 21:28:56 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:48.231 21:28:56 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:48.231 21:28:56 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.231 21:28:56 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.516 21:28:59 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:51.516 21:28:59 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:03:51.516 21:28:59 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:03:51.516 21:28:59 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf 00:03:51.516 21:28:59 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:51.517 21:28:59 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:51.517 21:28:59 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:03:51.517 21:28:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.517 21:28:59 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:51.517 21:28:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:51.517 21:28:59 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:51.517 No valid GPT data, 
bailing 00:03:51.517 21:28:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:51.517 21:28:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:51.517 21:28:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:51.517 21:28:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:51.517 21:28:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:51.517 21:28:59 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:51.517 21:28:59 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:51.517 21:28:59 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.517 21:28:59 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.517 21:28:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:51.517 ************************************ 00:03:51.517 START TEST nvme_mount 00:03:51.517 ************************************ 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:51.517 21:28:59 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:51.517 21:28:59 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:52.085 Creating new GPT entries in memory. 00:03:52.085 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:52.085 other utilities. 00:03:52.085 21:29:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:52.085 21:29:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.085 21:29:00 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:52.085 21:29:00 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:52.085 21:29:00 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:53.460 Creating new GPT entries in memory. 00:03:53.460 The operation has completed successfully. 00:03:53.460 21:29:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:53.460 21:29:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.460 21:29:01 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2857368 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
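The mkfs step traced above (setup/common.sh@66-72) is the heart of the nvme_mount flow: create the mount point, format the partition with ext4, and mount it so devices.sh can place its test file there. A simplified sketch; the optional size argument is the same one used later when the whole disk is reformatted with a 1024M cap:

mkfs() {
    local dev=$1 mount=$2 size=$3
    mkdir -p "$mount"
    [[ -e $dev ]] || return 1
    mkfs.ext4 -qF "$dev" $size             # -q quiet, -F force; $size left unquoted because it may be empty
    mount "$dev" "$mount"
}

# traced call: mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount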
00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.461 21:29:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.992 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.993 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.993 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:55.993 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:55.993 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.993 21:29:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.267 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:56.267 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:56.267 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:56.267 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:56.267 21:29:04 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.267 21:29:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.810 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.810 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:58.810 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:58.810 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.810 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.810 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.810 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.810 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.810 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.810 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.811 21:29:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.345 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.346 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.346 00:04:01.346 real 0m10.099s 00:04:01.346 user 0m2.773s 00:04:01.346 sys 0m5.100s 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.346 21:29:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:01.346 ************************************ 00:04:01.346 END TEST nvme_mount 00:04:01.346 ************************************ 
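Every suite in this log is driven through the same wrapper that prints the START/END banners and the real/user/sys timings, as in the END TEST nvme_mount block just above. A stripped-down sketch of that pattern; the real run_test in autotest_common.sh also validates its arguments (the '[' 2 -le 1 ']' checks in the trace) and toggles xtrace around the call:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                              # e.g. /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}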
00:04:01.346 21:29:09 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:01.346 21:29:09 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.346 21:29:09 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.346 21:29:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:01.346 ************************************ 00:04:01.346 START TEST dm_mount 00:04:01.346 ************************************ 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:01.346 21:29:09 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:02.282 Creating new GPT entries in memory. 00:04:02.282 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:02.282 other utilities. 00:04:02.282 21:29:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:02.282 21:29:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.282 21:29:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:02.282 21:29:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:02.282 21:29:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:03.220 Creating new GPT entries in memory. 00:04:03.220 The operation has completed successfully. 
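The dm_mount test opens by carving 1 GiB GPT partitions out of nvme0n1, just as nvme_mount did with a single one; the second sgdisk --new call follows right below. A condensed sketch of the partition_drive flow; the real helper (setup/common.sh@39-60) also funnels the partition uevents through scripts/sync_dev_uevents.sh before continuing:

partition_drive() {
    local disk=$1 part_no=${2:-2}
    local size=$((1073741824 / 512))       # 1 GiB expressed in 512-byte sectors
    local part part_start=0 part_end=0
    sgdisk "/dev/$disk" --zap-all          # wipe any existing GPT/MBR first
    for ((part = 1; part <= part_no; part++)); do
        ((part_start = part_start == 0 ? 2048 : part_end + 1))
        ((part_end = part_start + size - 1))
        # reproduces --new=1:2048:2099199 and --new=2:2099200:4196351 from this log
        flock "/dev/$disk" sgdisk "/dev/$disk" --new=$part:$part_start:$part_end
    done
}

# traced calls: partition_drive nvme0n1 1 (nvme_mount) and partition_drive nvme0n1 (dm_mount, two partitions)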
00:04:03.220 21:29:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:03.220 21:29:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.220 21:29:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:03.220 21:29:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:03.220 21:29:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:04.601 The operation has completed successfully. 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2861359 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.601 21:29:12 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.602 21:29:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:07.137 21:29:14 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.137 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:07.138 21:29:15 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.138 21:29:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.672 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:09.673 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:09.673 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.673 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:09.673 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.673 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:09.673 21:29:17 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:09.673 00:04:09.673 real 0m8.377s 00:04:09.673 user 0m2.007s 00:04:09.673 sys 0m3.408s 00:04:09.673 21:29:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.673 21:29:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:09.673 ************************************ 00:04:09.673 END TEST dm_mount 00:04:09.673 ************************************ 00:04:09.673 21:29:17 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:09.673 21:29:17 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:09.673 21:29:17 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.673 21:29:17 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.673 21:29:17 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:09.673 21:29:17 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.673 21:29:17 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:09.932 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:09.932 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:09.932 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:09.932 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:09.932 21:29:17 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:09.932 21:29:17 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.932 21:29:17 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:09.932 21:29:17 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.932 21:29:17 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:09.932 21:29:17 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.932 21:29:17 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:09.932 00:04:09.932 real 0m22.083s 00:04:09.932 user 0m6.044s 00:04:09.932 sys 0m10.737s 00:04:09.932 21:29:18 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.932 21:29:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:09.932 ************************************ 00:04:09.932 END TEST devices 00:04:09.932 ************************************ 00:04:09.932 00:04:09.932 real 1m13.557s 00:04:09.932 user 0m23.620s 00:04:09.932 sys 0m40.441s 00:04:09.932 21:29:18 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.932 21:29:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:09.932 ************************************ 00:04:09.932 END TEST setup.sh 00:04:09.932 ************************************ 00:04:10.191 21:29:18 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:12.724 Hugepages 00:04:12.724 node hugesize free / total 00:04:12.724 node0 1048576kB 0 / 0 00:04:12.724 node0 2048kB 2048 / 2048 00:04:12.724 node1 1048576kB 0 / 0 00:04:12.724 node1 2048kB 0 / 0 00:04:12.724 00:04:12.724 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:12.724 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:12.724 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:12.725 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:12.725 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:12.725 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:12.725 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:12.725 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:12.725 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:12.725 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:12.725 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:12.725 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:12.725 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:12.725 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:12.725 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:12.725 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:12.725 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:12.725 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:12.725 21:29:20 -- spdk/autotest.sh@130 -- # uname -s 00:04:12.725 21:29:20 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:12.725 21:29:20 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:12.725 21:29:20 -- common/autotest_common.sh@1529 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:15.257 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:15.257 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:16.190 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:16.448 21:29:24 -- common/autotest_common.sh@1530 -- # sleep 1 00:04:17.415 21:29:25 -- common/autotest_common.sh@1531 -- # bdfs=() 00:04:17.415 21:29:25 -- common/autotest_common.sh@1531 -- # local bdfs 00:04:17.416 21:29:25 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:04:17.416 21:29:25 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:04:17.416 21:29:25 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:17.416 21:29:25 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:17.416 21:29:25 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:17.416 21:29:25 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:17.416 21:29:25 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:17.416 21:29:25 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:04:17.416 21:29:25 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:5e:00.0 00:04:17.416 21:29:25 -- common/autotest_common.sh@1534 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.947 Waiting for block devices as requested 00:04:19.947 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:19.947 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:19.947 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:19.947 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:19.947 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:19.947 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:20.206 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:20.206 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:20.206 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:20.206 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:20.465 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:20.465 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:20.465 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:20.725 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:20.725 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:20.725 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:20.725 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:20.984 21:29:28 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 
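The nvme_namespace_revert step traced above first enumerates NVMe PCI addresses with gen_nvme.sh, then resolves each address to its /dev/nvmeX controller node and reads the OACS field to see whether namespace management (bit 3) is supported. A minimal stand-alone sketch of that discovery, assuming it is run from the SPDK repo root with nvme-cli installed (the jq filter and sysfs paths are taken from the trace; everything else is illustrative):

    # enumerate NVMe BDFs the same way the harness does
    bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # map the PCI address to its /dev/nvmeX node via sysfs
        ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
        # OACS bit 3 (mask 0x8) set means the controller supports namespace management
        oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
        echo "$bdf -> /dev/$ctrlr oacs=$oacs"
    done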
00:04:20.984 21:29:28 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:20.984 21:29:28 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 00:04:20.984 21:29:28 -- common/autotest_common.sh@1500 -- # grep 0000:5e:00.0/nvme/nvme 00:04:20.984 21:29:28 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:20.984 21:29:28 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:20.984 21:29:28 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:20.984 21:29:28 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:04:20.984 21:29:28 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:04:20.984 21:29:28 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:04:20.984 21:29:28 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:04:20.984 21:29:28 -- common/autotest_common.sh@1543 -- # grep oacs 00:04:20.984 21:29:28 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:04:20.984 21:29:28 -- common/autotest_common.sh@1543 -- # oacs=' 0xe' 00:04:20.984 21:29:28 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:04:20.984 21:29:28 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:04:20.984 21:29:28 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:04:20.984 21:29:28 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:04:20.984 21:29:28 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:04:20.984 21:29:28 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:04:20.984 21:29:28 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:04:20.984 21:29:28 -- common/autotest_common.sh@1555 -- # continue 00:04:20.984 21:29:28 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:20.984 21:29:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:20.984 21:29:28 -- common/autotest_common.sh@10 -- # set +x 00:04:20.984 21:29:28 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:20.984 21:29:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:20.984 21:29:28 -- common/autotest_common.sh@10 -- # set +x 00:04:20.984 21:29:28 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:23.529 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:23.529 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.098 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:24.357 21:29:32 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:24.357 21:29:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.357 21:29:32 -- 
common/autotest_common.sh@10 -- # set +x 00:04:24.357 21:29:32 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:24.357 21:29:32 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:04:24.357 21:29:32 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.357 21:29:32 -- common/autotest_common.sh@1575 -- # bdfs=() 00:04:24.357 21:29:32 -- common/autotest_common.sh@1575 -- # local bdfs 00:04:24.357 21:29:32 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:04:24.357 21:29:32 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:24.357 21:29:32 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:24.357 21:29:32 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.357 21:29:32 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:24.357 21:29:32 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:24.357 21:29:32 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:04:24.357 21:29:32 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:5e:00.0 00:04:24.357 21:29:32 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:04:24.357 21:29:32 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:24.357 21:29:32 -- common/autotest_common.sh@1578 -- # device=0x0a54 00:04:24.357 21:29:32 -- common/autotest_common.sh@1579 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:24.357 21:29:32 -- common/autotest_common.sh@1580 -- # bdfs+=($bdf) 00:04:24.357 21:29:32 -- common/autotest_common.sh@1584 -- # printf '%s\n' 0000:5e:00.0 00:04:24.357 21:29:32 -- common/autotest_common.sh@1590 -- # [[ -z 0000:5e:00.0 ]] 00:04:24.357 21:29:32 -- common/autotest_common.sh@1595 -- # spdk_tgt_pid=2869996 00:04:24.357 21:29:32 -- common/autotest_common.sh@1596 -- # waitforlisten 2869996 00:04:24.357 21:29:32 -- common/autotest_common.sh@829 -- # '[' -z 2869996 ']' 00:04:24.357 21:29:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.357 21:29:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.357 21:29:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.357 21:29:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.357 21:29:32 -- common/autotest_common.sh@10 -- # set +x 00:04:24.357 21:29:32 -- common/autotest_common.sh@1594 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.357 [2024-07-24 21:29:32.385035] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
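The opal_revert_cleanup step that begins here keeps only controllers whose PCI device ID matches 0x0a54, starts a dedicated spdk_tgt, and reverts the drive over JSON-RPC. Condensed into a hedged sketch (the sysfs check and both rpc.py calls appear verbatim in the trace; running the target in the background with a simple kill/wait is a simplification of what the harness does):

    # only act on the controller the test expects (device ID 0x0a54)
    [[ $(cat /sys/bus/pci/devices/0000:5e:00.0/device) == 0x0a54 ]] || exit 0
    ./build/bin/spdk_tgt &                    # listens on /var/tmp/spdk.sock by default
    tgt_pid=$!
    # (the harness waits for the RPC socket to appear before issuing calls)
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
    ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test   # returns -32602 in this run because nvme0 does not support Opal
    kill "$tgt_pid"; wait "$tgt_pid"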
00:04:24.357 [2024-07-24 21:29:32.385084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869996 ] 00:04:24.357 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.357 [2024-07-24 21:29:32.438573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.616 [2024-07-24 21:29:32.520513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.184 21:29:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.184 21:29:33 -- common/autotest_common.sh@862 -- # return 0 00:04:25.184 21:29:33 -- common/autotest_common.sh@1598 -- # bdf_id=0 00:04:25.184 21:29:33 -- common/autotest_common.sh@1599 -- # for bdf in "${bdfs[@]}" 00:04:25.184 21:29:33 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:28.475 nvme0n1 00:04:28.475 21:29:36 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:28.475 [2024-07-24 21:29:36.309345] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:28.475 request: 00:04:28.475 { 00:04:28.475 "nvme_ctrlr_name": "nvme0", 00:04:28.475 "password": "test", 00:04:28.475 "method": "bdev_nvme_opal_revert", 00:04:28.475 "req_id": 1 00:04:28.475 } 00:04:28.475 Got JSON-RPC error response 00:04:28.475 response: 00:04:28.475 { 00:04:28.475 "code": -32602, 00:04:28.475 "message": "Invalid parameters" 00:04:28.475 } 00:04:28.475 21:29:36 -- common/autotest_common.sh@1602 -- # true 00:04:28.475 21:29:36 -- common/autotest_common.sh@1603 -- # (( ++bdf_id )) 00:04:28.475 21:29:36 -- common/autotest_common.sh@1606 -- # killprocess 2869996 00:04:28.475 21:29:36 -- common/autotest_common.sh@948 -- # '[' -z 2869996 ']' 00:04:28.475 21:29:36 -- common/autotest_common.sh@952 -- # kill -0 2869996 00:04:28.475 21:29:36 -- common/autotest_common.sh@953 -- # uname 00:04:28.475 21:29:36 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:28.475 21:29:36 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2869996 00:04:28.475 21:29:36 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:28.475 21:29:36 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:28.475 21:29:36 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2869996' 00:04:28.475 killing process with pid 2869996 00:04:28.475 21:29:36 -- common/autotest_common.sh@967 -- # kill 2869996 00:04:28.475 21:29:36 -- common/autotest_common.sh@972 -- # wait 2869996 00:04:30.383 21:29:37 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:30.383 21:29:37 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:30.383 21:29:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:30.383 21:29:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:30.383 21:29:37 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:30.383 21:29:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.383 21:29:37 -- common/autotest_common.sh@10 -- # set +x 00:04:30.383 21:29:37 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:30.383 21:29:37 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:30.383 21:29:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:30.383 21:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.383 21:29:37 -- common/autotest_common.sh@10 -- # set +x 00:04:30.383 ************************************ 00:04:30.383 START TEST env 00:04:30.383 ************************************ 00:04:30.383 21:29:38 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:30.383 * Looking for test storage... 00:04:30.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:30.383 21:29:38 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:30.383 21:29:38 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.383 21:29:38 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.383 21:29:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.383 ************************************ 00:04:30.383 START TEST env_memory 00:04:30.383 ************************************ 00:04:30.383 21:29:38 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:30.383 00:04:30.383 00:04:30.383 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.383 http://cunit.sourceforge.net/ 00:04:30.383 00:04:30.383 00:04:30.383 Suite: memory 00:04:30.383 Test: alloc and free memory map ...[2024-07-24 21:29:38.183344] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:30.383 passed 00:04:30.383 Test: mem map translation ...[2024-07-24 21:29:38.201455] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:30.383 [2024-07-24 21:29:38.201471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:30.383 [2024-07-24 21:29:38.201505] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:30.383 [2024-07-24 21:29:38.201513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:30.383 passed 00:04:30.383 Test: mem map registration ...[2024-07-24 21:29:38.238148] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:30.383 [2024-07-24 21:29:38.238163] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:30.383 passed 00:04:30.383 Test: mem map adjacent registrations ...passed 00:04:30.383 00:04:30.383 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.383 suites 1 1 n/a 0 0 00:04:30.383 tests 4 4 4 0 0 00:04:30.383 asserts 152 152 152 0 n/a 00:04:30.383 00:04:30.383 Elapsed time = 0.132 seconds 00:04:30.383 00:04:30.383 real 0m0.145s 00:04:30.383 user 0m0.134s 00:04:30.383 sys 0m0.011s 00:04:30.383 21:29:38 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.383 21:29:38 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:30.383 ************************************ 00:04:30.383 END TEST env_memory 00:04:30.383 ************************************ 00:04:30.383 21:29:38 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.383 21:29:38 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.383 21:29:38 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.383 21:29:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.383 ************************************ 00:04:30.383 START TEST env_vtophys 00:04:30.383 ************************************ 00:04:30.383 21:29:38 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.383 EAL: lib.eal log level changed from notice to debug 00:04:30.383 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.383 EAL: Detected lcore 1 as core 1 on socket 0 00:04:30.383 EAL: Detected lcore 2 as core 2 on socket 0 00:04:30.383 EAL: Detected lcore 3 as core 3 on socket 0 00:04:30.383 EAL: Detected lcore 4 as core 4 on socket 0 00:04:30.383 EAL: Detected lcore 5 as core 5 on socket 0 00:04:30.383 EAL: Detected lcore 6 as core 6 on socket 0 00:04:30.383 EAL: Detected lcore 7 as core 8 on socket 0 00:04:30.383 EAL: Detected lcore 8 as core 9 on socket 0 00:04:30.383 EAL: Detected lcore 9 as core 10 on socket 0 00:04:30.383 EAL: Detected lcore 10 as core 11 on socket 0 00:04:30.383 EAL: Detected lcore 11 as core 12 on socket 0 00:04:30.383 EAL: Detected lcore 12 as core 13 on socket 0 00:04:30.383 EAL: Detected lcore 13 as core 16 on socket 0 00:04:30.383 EAL: Detected lcore 14 as core 17 on socket 0 00:04:30.383 EAL: Detected lcore 15 as core 18 on socket 0 00:04:30.383 EAL: Detected lcore 16 as core 19 on socket 0 00:04:30.383 EAL: Detected lcore 17 as core 20 on socket 0 00:04:30.383 EAL: Detected lcore 18 as core 21 on socket 0 00:04:30.383 EAL: Detected lcore 19 as core 25 on socket 0 00:04:30.383 EAL: Detected lcore 20 as core 26 on socket 0 00:04:30.383 EAL: Detected lcore 21 as core 27 on socket 0 00:04:30.383 EAL: Detected lcore 22 as core 28 on socket 0 00:04:30.383 EAL: Detected lcore 23 as core 29 on socket 0 00:04:30.383 EAL: Detected lcore 24 as core 0 on socket 1 00:04:30.383 EAL: Detected lcore 25 as core 1 on socket 1 00:04:30.383 EAL: Detected lcore 26 as core 2 on socket 1 00:04:30.383 EAL: Detected lcore 27 as core 3 on socket 1 00:04:30.383 EAL: Detected lcore 28 as core 4 on socket 1 00:04:30.383 EAL: Detected lcore 29 as core 5 on socket 1 00:04:30.383 EAL: Detected lcore 30 as core 6 on socket 1 00:04:30.383 EAL: Detected lcore 31 as core 9 on socket 1 00:04:30.383 EAL: Detected lcore 32 as core 10 on socket 1 00:04:30.383 EAL: Detected lcore 33 as core 11 on socket 1 00:04:30.383 EAL: Detected lcore 34 as core 12 on socket 1 00:04:30.383 EAL: Detected lcore 35 as core 13 on socket 1 00:04:30.383 EAL: Detected lcore 36 as core 16 on socket 1 00:04:30.383 EAL: Detected lcore 37 as core 17 on socket 1 00:04:30.383 EAL: Detected lcore 38 as core 18 on socket 1 00:04:30.383 EAL: Detected lcore 39 as core 19 on socket 1 00:04:30.383 EAL: Detected lcore 40 as core 20 on socket 1 00:04:30.383 EAL: Detected lcore 41 as core 21 on socket 1 00:04:30.383 EAL: Detected lcore 42 as core 24 on socket 1 00:04:30.383 EAL: Detected lcore 43 as core 25 on socket 1 00:04:30.383 EAL: Detected lcore 44 as core 26 on socket 1 00:04:30.383 EAL: Detected lcore 45 as core 27 on socket 1 
00:04:30.383 EAL: Detected lcore 46 as core 28 on socket 1 00:04:30.383 EAL: Detected lcore 47 as core 29 on socket 1 00:04:30.383 EAL: Detected lcore 48 as core 0 on socket 0 00:04:30.383 EAL: Detected lcore 49 as core 1 on socket 0 00:04:30.383 EAL: Detected lcore 50 as core 2 on socket 0 00:04:30.383 EAL: Detected lcore 51 as core 3 on socket 0 00:04:30.383 EAL: Detected lcore 52 as core 4 on socket 0 00:04:30.383 EAL: Detected lcore 53 as core 5 on socket 0 00:04:30.383 EAL: Detected lcore 54 as core 6 on socket 0 00:04:30.383 EAL: Detected lcore 55 as core 8 on socket 0 00:04:30.383 EAL: Detected lcore 56 as core 9 on socket 0 00:04:30.383 EAL: Detected lcore 57 as core 10 on socket 0 00:04:30.383 EAL: Detected lcore 58 as core 11 on socket 0 00:04:30.383 EAL: Detected lcore 59 as core 12 on socket 0 00:04:30.383 EAL: Detected lcore 60 as core 13 on socket 0 00:04:30.383 EAL: Detected lcore 61 as core 16 on socket 0 00:04:30.383 EAL: Detected lcore 62 as core 17 on socket 0 00:04:30.383 EAL: Detected lcore 63 as core 18 on socket 0 00:04:30.383 EAL: Detected lcore 64 as core 19 on socket 0 00:04:30.383 EAL: Detected lcore 65 as core 20 on socket 0 00:04:30.383 EAL: Detected lcore 66 as core 21 on socket 0 00:04:30.383 EAL: Detected lcore 67 as core 25 on socket 0 00:04:30.383 EAL: Detected lcore 68 as core 26 on socket 0 00:04:30.383 EAL: Detected lcore 69 as core 27 on socket 0 00:04:30.383 EAL: Detected lcore 70 as core 28 on socket 0 00:04:30.383 EAL: Detected lcore 71 as core 29 on socket 0 00:04:30.383 EAL: Detected lcore 72 as core 0 on socket 1 00:04:30.383 EAL: Detected lcore 73 as core 1 on socket 1 00:04:30.383 EAL: Detected lcore 74 as core 2 on socket 1 00:04:30.383 EAL: Detected lcore 75 as core 3 on socket 1 00:04:30.383 EAL: Detected lcore 76 as core 4 on socket 1 00:04:30.383 EAL: Detected lcore 77 as core 5 on socket 1 00:04:30.383 EAL: Detected lcore 78 as core 6 on socket 1 00:04:30.383 EAL: Detected lcore 79 as core 9 on socket 1 00:04:30.383 EAL: Detected lcore 80 as core 10 on socket 1 00:04:30.383 EAL: Detected lcore 81 as core 11 on socket 1 00:04:30.383 EAL: Detected lcore 82 as core 12 on socket 1 00:04:30.383 EAL: Detected lcore 83 as core 13 on socket 1 00:04:30.383 EAL: Detected lcore 84 as core 16 on socket 1 00:04:30.384 EAL: Detected lcore 85 as core 17 on socket 1 00:04:30.384 EAL: Detected lcore 86 as core 18 on socket 1 00:04:30.384 EAL: Detected lcore 87 as core 19 on socket 1 00:04:30.384 EAL: Detected lcore 88 as core 20 on socket 1 00:04:30.384 EAL: Detected lcore 89 as core 21 on socket 1 00:04:30.384 EAL: Detected lcore 90 as core 24 on socket 1 00:04:30.384 EAL: Detected lcore 91 as core 25 on socket 1 00:04:30.384 EAL: Detected lcore 92 as core 26 on socket 1 00:04:30.384 EAL: Detected lcore 93 as core 27 on socket 1 00:04:30.384 EAL: Detected lcore 94 as core 28 on socket 1 00:04:30.384 EAL: Detected lcore 95 as core 29 on socket 1 00:04:30.384 EAL: Maximum logical cores by configuration: 128 00:04:30.384 EAL: Detected CPU lcores: 96 00:04:30.384 EAL: Detected NUMA nodes: 2 00:04:30.384 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:30.384 EAL: Detected shared linkage of DPDK 00:04:30.384 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.384 EAL: Bus pci wants IOVA as 'DC' 00:04:30.384 EAL: Buses did not request a specific IOVA mode. 00:04:30.384 EAL: IOMMU is available, selecting IOVA as VA mode. 
00:04:30.384 EAL: Selected IOVA mode 'VA' 00:04:30.384 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.384 EAL: Probing VFIO support... 00:04:30.384 EAL: IOMMU type 1 (Type 1) is supported 00:04:30.384 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:30.384 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:30.384 EAL: VFIO support initialized 00:04:30.384 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.384 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.384 EAL: Setting up physically contiguous memory... 00:04:30.384 EAL: Setting maximum number of open files to 524288 00:04:30.384 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.384 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:30.384 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.384 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.384 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.384 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.384 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.384 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.384 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.384 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.384 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.384 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.384 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.384 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.384 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.384 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.384 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.384 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.384 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.384 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.384 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:30.384 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.384 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:30.384 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.384 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:30.384 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.384 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:30.384 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.384 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:30.384 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.384 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:30.384 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.384 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:30.384 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.384 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:30.384 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:30.384 EAL: Hugepages will be freed exactly as allocated. 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: TSC frequency is ~2300000 KHz 00:04:30.384 EAL: Main lcore 0 is ready (tid=7fbd98227a00;cpuset=[0]) 00:04:30.384 EAL: Trying to obtain current memory policy. 00:04:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.384 EAL: Restoring previous memory policy: 0 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.384 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.384 00:04:30.384 00:04:30.384 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.384 http://cunit.sourceforge.net/ 00:04:30.384 00:04:30.384 00:04:30.384 Suite: components_suite 00:04:30.384 Test: vtophys_malloc_test ...passed 00:04:30.384 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.384 EAL: Restoring previous memory policy: 4 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.384 EAL: Trying to obtain current memory policy. 00:04:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.384 EAL: Restoring previous memory policy: 4 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.384 EAL: Trying to obtain current memory policy. 
00:04:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.384 EAL: Restoring previous memory policy: 4 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.384 EAL: Trying to obtain current memory policy. 00:04:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.384 EAL: Restoring previous memory policy: 4 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was expanded by 18MB 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was shrunk by 18MB 00:04:30.384 EAL: Trying to obtain current memory policy. 00:04:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.384 EAL: Restoring previous memory policy: 4 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was expanded by 34MB 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was shrunk by 34MB 00:04:30.384 EAL: Trying to obtain current memory policy. 00:04:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.384 EAL: Restoring previous memory policy: 4 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was expanded by 66MB 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was shrunk by 66MB 00:04:30.384 EAL: Trying to obtain current memory policy. 00:04:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.384 EAL: Restoring previous memory policy: 4 00:04:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.384 EAL: request: mp_malloc_sync 00:04:30.384 EAL: No shared files mode enabled, IPC is disabled 00:04:30.384 EAL: Heap on socket 0 was expanded by 130MB 00:04:30.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.643 EAL: request: mp_malloc_sync 00:04:30.643 EAL: No shared files mode enabled, IPC is disabled 00:04:30.643 EAL: Heap on socket 0 was shrunk by 130MB 00:04:30.643 EAL: Trying to obtain current memory policy. 
00:04:30.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.643 EAL: Restoring previous memory policy: 4 00:04:30.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.643 EAL: request: mp_malloc_sync 00:04:30.643 EAL: No shared files mode enabled, IPC is disabled 00:04:30.643 EAL: Heap on socket 0 was expanded by 258MB 00:04:30.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.643 EAL: request: mp_malloc_sync 00:04:30.643 EAL: No shared files mode enabled, IPC is disabled 00:04:30.643 EAL: Heap on socket 0 was shrunk by 258MB 00:04:30.643 EAL: Trying to obtain current memory policy. 00:04:30.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.643 EAL: Restoring previous memory policy: 4 00:04:30.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.643 EAL: request: mp_malloc_sync 00:04:30.643 EAL: No shared files mode enabled, IPC is disabled 00:04:30.643 EAL: Heap on socket 0 was expanded by 514MB 00:04:30.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.902 EAL: request: mp_malloc_sync 00:04:30.902 EAL: No shared files mode enabled, IPC is disabled 00:04:30.902 EAL: Heap on socket 0 was shrunk by 514MB 00:04:30.902 EAL: Trying to obtain current memory policy. 00:04:30.902 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.161 EAL: Restoring previous memory policy: 4 00:04:31.161 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.161 EAL: request: mp_malloc_sync 00:04:31.162 EAL: No shared files mode enabled, IPC is disabled 00:04:31.162 EAL: Heap on socket 0 was expanded by 1026MB 00:04:31.162 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.420 EAL: request: mp_malloc_sync 00:04:31.420 EAL: No shared files mode enabled, IPC is disabled 00:04:31.420 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.420 passed 00:04:31.420 00:04:31.420 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.420 suites 1 1 n/a 0 0 00:04:31.420 tests 2 2 2 0 0 00:04:31.420 asserts 497 497 497 0 n/a 00:04:31.420 00:04:31.420 Elapsed time = 0.961 seconds 00:04:31.420 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.420 EAL: request: mp_malloc_sync 00:04:31.420 EAL: No shared files mode enabled, IPC is disabled 00:04:31.420 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.420 EAL: No shared files mode enabled, IPC is disabled 00:04:31.420 EAL: No shared files mode enabled, IPC is disabled 00:04:31.420 EAL: No shared files mode enabled, IPC is disabled 00:04:31.420 00:04:31.420 real 0m1.065s 00:04:31.420 user 0m0.635s 00:04:31.420 sys 0m0.406s 00:04:31.420 21:29:39 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.420 21:29:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:31.420 ************************************ 00:04:31.420 END TEST env_vtophys 00:04:31.420 ************************************ 00:04:31.420 21:29:39 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.420 21:29:39 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.420 21:29:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.420 21:29:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.420 ************************************ 00:04:31.420 START TEST env_pci 00:04:31.420 ************************************ 00:04:31.420 21:29:39 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.420 00:04:31.420 00:04:31.420 CUnit - A unit testing 
framework for C - Version 2.1-3 00:04:31.420 http://cunit.sourceforge.net/ 00:04:31.420 00:04:31.420 00:04:31.420 Suite: pci 00:04:31.420 Test: pci_hook ...[2024-07-24 21:29:39.503416] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2871383 has claimed it 00:04:31.420 EAL: Cannot find device (10000:00:01.0) 00:04:31.420 EAL: Failed to attach device on primary process 00:04:31.420 passed 00:04:31.420 00:04:31.420 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.420 suites 1 1 n/a 0 0 00:04:31.420 tests 1 1 1 0 0 00:04:31.420 asserts 25 25 25 0 n/a 00:04:31.420 00:04:31.420 Elapsed time = 0.026 seconds 00:04:31.420 00:04:31.420 real 0m0.045s 00:04:31.420 user 0m0.017s 00:04:31.420 sys 0m0.028s 00:04:31.420 21:29:39 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.420 21:29:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:31.420 ************************************ 00:04:31.420 END TEST env_pci 00:04:31.420 ************************************ 00:04:31.679 21:29:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.679 21:29:39 env -- env/env.sh@15 -- # uname 00:04:31.679 21:29:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.679 21:29:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.679 21:29:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.679 21:29:39 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:31.679 21:29:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.679 21:29:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.679 ************************************ 00:04:31.679 START TEST env_dpdk_post_init 00:04:31.679 ************************************ 00:04:31.679 21:29:39 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.679 EAL: Detected CPU lcores: 96 00:04:31.679 EAL: Detected NUMA nodes: 2 00:04:31.679 EAL: Detected shared linkage of DPDK 00:04:31.679 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.679 EAL: Selected IOVA mode 'VA' 00:04:31.679 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.679 EAL: VFIO support initialized 00:04:31.679 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.679 EAL: Using IOMMU type 1 (Type 1) 00:04:31.679 EAL: Ignore mapping IO port bar(1) 00:04:31.679 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:31.679 EAL: Ignore mapping IO port bar(1) 00:04:31.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:31.680 EAL: Ignore mapping IO port bar(1) 00:04:31.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:31.680 EAL: Ignore mapping IO port bar(1) 00:04:31.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:31.680 EAL: Ignore mapping IO port bar(1) 00:04:31.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:31.680 EAL: Ignore mapping IO port bar(1) 00:04:31.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:31.680 EAL: Ignore mapping IO 
port bar(1) 00:04:31.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:31.940 EAL: Ignore mapping IO port bar(1) 00:04:31.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:32.509 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:32.509 EAL: Ignore mapping IO port bar(1) 00:04:32.509 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:32.509 EAL: Ignore mapping IO port bar(1) 00:04:32.509 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:32.509 EAL: Ignore mapping IO port bar(1) 00:04:32.509 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:32.509 EAL: Ignore mapping IO port bar(1) 00:04:32.509 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:32.509 EAL: Ignore mapping IO port bar(1) 00:04:32.509 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:32.509 EAL: Ignore mapping IO port bar(1) 00:04:32.509 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:32.509 EAL: Ignore mapping IO port bar(1) 00:04:32.509 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:32.767 EAL: Ignore mapping IO port bar(1) 00:04:32.767 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:36.051 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:36.051 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:36.051 Starting DPDK initialization... 00:04:36.051 Starting SPDK post initialization... 00:04:36.051 SPDK NVMe probe 00:04:36.051 Attaching to 0000:5e:00.0 00:04:36.051 Attached to 0000:5e:00.0 00:04:36.051 Cleaning up... 
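The env_dpdk_post_init run that just attached to 0000:5e:00.0 is essentially DPDK EAL initialization followed by an SPDK PCI probe against the one allowed NVMe device. Reproducing it outside the harness would look roughly like this (the test binary path and its -c/--base-virtaddr flags come from the trace; driver binding and hugepage setup via setup.sh are preconditions the harness handles elsewhere):

    sudo HUGEMEM=2048 ./scripts/setup.sh                  # bind NVMe/I-OAT devices to vfio-pci, reserve hugepages
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init \
         -c 0x1 --base-virtaddr=0x200000000000            # single core, fixed virtual base as in the log
    sudo ./scripts/setup.sh reset                         # hand the devices back to the kernel drivers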
00:04:36.051 00:04:36.051 real 0m4.308s 00:04:36.051 user 0m3.263s 00:04:36.051 sys 0m0.117s 00:04:36.051 21:29:43 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.051 21:29:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.052 ************************************ 00:04:36.052 END TEST env_dpdk_post_init 00:04:36.052 ************************************ 00:04:36.052 21:29:43 env -- env/env.sh@26 -- # uname 00:04:36.052 21:29:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:36.052 21:29:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.052 21:29:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.052 21:29:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.052 21:29:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.052 ************************************ 00:04:36.052 START TEST env_mem_callbacks 00:04:36.052 ************************************ 00:04:36.052 21:29:43 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.052 EAL: Detected CPU lcores: 96 00:04:36.052 EAL: Detected NUMA nodes: 2 00:04:36.052 EAL: Detected shared linkage of DPDK 00:04:36.052 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.052 EAL: Selected IOVA mode 'VA' 00:04:36.052 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.052 EAL: VFIO support initialized 00:04:36.052 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.052 00:04:36.052 00:04:36.052 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.052 http://cunit.sourceforge.net/ 00:04:36.052 00:04:36.052 00:04:36.052 Suite: memory 00:04:36.052 Test: test ... 
00:04:36.052 register 0x200000200000 2097152 00:04:36.052 malloc 3145728 00:04:36.052 register 0x200000400000 4194304 00:04:36.052 buf 0x200000500000 len 3145728 PASSED 00:04:36.052 malloc 64 00:04:36.052 buf 0x2000004fff40 len 64 PASSED 00:04:36.052 malloc 4194304 00:04:36.052 register 0x200000800000 6291456 00:04:36.052 buf 0x200000a00000 len 4194304 PASSED 00:04:36.052 free 0x200000500000 3145728 00:04:36.052 free 0x2000004fff40 64 00:04:36.052 unregister 0x200000400000 4194304 PASSED 00:04:36.052 free 0x200000a00000 4194304 00:04:36.052 unregister 0x200000800000 6291456 PASSED 00:04:36.052 malloc 8388608 00:04:36.052 register 0x200000400000 10485760 00:04:36.052 buf 0x200000600000 len 8388608 PASSED 00:04:36.052 free 0x200000600000 8388608 00:04:36.052 unregister 0x200000400000 10485760 PASSED 00:04:36.052 passed 00:04:36.052 00:04:36.052 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.052 suites 1 1 n/a 0 0 00:04:36.052 tests 1 1 1 0 0 00:04:36.052 asserts 15 15 15 0 n/a 00:04:36.052 00:04:36.052 Elapsed time = 0.005 seconds 00:04:36.052 00:04:36.052 real 0m0.056s 00:04:36.052 user 0m0.018s 00:04:36.052 sys 0m0.038s 00:04:36.052 21:29:44 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.052 21:29:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:36.052 ************************************ 00:04:36.052 END TEST env_mem_callbacks 00:04:36.052 ************************************ 00:04:36.052 00:04:36.052 real 0m6.056s 00:04:36.052 user 0m4.239s 00:04:36.052 sys 0m0.894s 00:04:36.052 21:29:44 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.052 21:29:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.052 ************************************ 00:04:36.052 END TEST env 00:04:36.052 ************************************ 00:04:36.052 21:29:44 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.052 21:29:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.052 21:29:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.052 21:29:44 -- common/autotest_common.sh@10 -- # set +x 00:04:36.052 ************************************ 00:04:36.052 START TEST rpc 00:04:36.052 ************************************ 00:04:36.052 21:29:44 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.310 * Looking for test storage... 00:04:36.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.310 21:29:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2872271 00:04:36.310 21:29:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.310 21:29:44 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:36.310 21:29:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2872271 00:04:36.310 21:29:44 rpc -- common/autotest_common.sh@829 -- # '[' -z 2872271 ']' 00:04:36.310 21:29:44 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.310 21:29:44 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.310 21:29:44 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
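The rpc.sh suite starting here drives a fresh spdk_tgt (launched with '-e bdev' above) purely through the default UNIX-domain RPC socket. For orientation, poking at such a target by hand looks like the sketch below; rpc_get_methods and bdev_get_bdevs are standard rpc.py commands (bdev_get_bdevs is also the first call rpc_integrity makes), while the Malloc0 round-trip is only an illustration and not something this test performs:

    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods | head    # list the JSON-RPC methods the target exposes
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs            # empty list on a freshly started '-e bdev' target
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create -b Malloc0 8 512   # 8 MiB malloc bdev, 512-byte blocks
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0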
00:04:36.310 21:29:44 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.310 21:29:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.310 [2024-07-24 21:29:44.274027] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:04:36.310 [2024-07-24 21:29:44.274082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872271 ] 00:04:36.310 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.310 [2024-07-24 21:29:44.329471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.310 [2024-07-24 21:29:44.411396] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:36.310 [2024-07-24 21:29:44.411429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2872271' to capture a snapshot of events at runtime. 00:04:36.310 [2024-07-24 21:29:44.411436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:36.310 [2024-07-24 21:29:44.411442] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:36.310 [2024-07-24 21:29:44.411448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2872271 for offline analysis/debug. 00:04:36.310 [2024-07-24 21:29:44.411486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.283 21:29:45 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.283 21:29:45 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:37.283 21:29:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.283 21:29:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.283 21:29:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.283 21:29:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.283 21:29:45 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.283 21:29:45 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.283 21:29:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.283 ************************************ 00:04:37.283 START TEST rpc_integrity 00:04:37.283 ************************************ 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:37.283 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.283 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.283 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.283 21:29:45 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.283 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.283 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.283 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.283 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.283 { 00:04:37.283 "name": "Malloc0", 00:04:37.283 "aliases": [ 00:04:37.283 "09526f13-c303-4eac-bc78-cb0145e5180c" 00:04:37.283 ], 00:04:37.283 "product_name": "Malloc disk", 00:04:37.283 "block_size": 512, 00:04:37.283 "num_blocks": 16384, 00:04:37.283 "uuid": "09526f13-c303-4eac-bc78-cb0145e5180c", 00:04:37.283 "assigned_rate_limits": { 00:04:37.283 "rw_ios_per_sec": 0, 00:04:37.283 "rw_mbytes_per_sec": 0, 00:04:37.283 "r_mbytes_per_sec": 0, 00:04:37.283 "w_mbytes_per_sec": 0 00:04:37.283 }, 00:04:37.283 "claimed": false, 00:04:37.283 "zoned": false, 00:04:37.283 "supported_io_types": { 00:04:37.283 "read": true, 00:04:37.283 "write": true, 00:04:37.283 "unmap": true, 00:04:37.283 "flush": true, 00:04:37.283 "reset": true, 00:04:37.283 "nvme_admin": false, 00:04:37.283 "nvme_io": false, 00:04:37.283 "nvme_io_md": false, 00:04:37.283 "write_zeroes": true, 00:04:37.283 "zcopy": true, 00:04:37.283 "get_zone_info": false, 00:04:37.283 "zone_management": false, 00:04:37.283 "zone_append": false, 00:04:37.283 "compare": false, 00:04:37.283 "compare_and_write": false, 00:04:37.283 "abort": true, 00:04:37.283 "seek_hole": false, 00:04:37.283 "seek_data": false, 00:04:37.283 "copy": true, 00:04:37.283 "nvme_iov_md": false 00:04:37.283 }, 00:04:37.283 "memory_domains": [ 00:04:37.283 { 00:04:37.283 "dma_device_id": "system", 00:04:37.283 "dma_device_type": 1 00:04:37.283 }, 00:04:37.283 { 00:04:37.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.283 "dma_device_type": 2 00:04:37.283 } 00:04:37.283 ], 00:04:37.283 "driver_specific": {} 00:04:37.283 } 00:04:37.283 ]' 00:04:37.283 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.283 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.283 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.283 [2024-07-24 21:29:45.226778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.283 [2024-07-24 21:29:45.226806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.283 [2024-07-24 21:29:45.226817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf782d0 00:04:37.283 [2024-07-24 21:29:45.226823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.283 [2024-07-24 21:29:45.227897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
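For reference, rpc_cmd in this transcript is the autotest wrapper around SPDK's scripts/rpc.py client, so the rpc_integrity sequence exercised here (and continuing below) can be replayed by hand against the same target. A rough equivalent, assuming the spdk_tgt started earlier is still listening on the default /var/tmp/spdk.sock and the commands are run from the SPDK source root:

    ./scripts/rpc.py bdev_malloc_create 8 512             # 8 MiB malloc bdev with 512-byte blocks -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length           # 2: Malloc0 plus the Passthru0 claiming it
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length           # back to 0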
00:04:37.283 [2024-07-24 21:29:45.227918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.283 Passthru0 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.283 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.283 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.283 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.283 { 00:04:37.283 "name": "Malloc0", 00:04:37.283 "aliases": [ 00:04:37.283 "09526f13-c303-4eac-bc78-cb0145e5180c" 00:04:37.283 ], 00:04:37.283 "product_name": "Malloc disk", 00:04:37.283 "block_size": 512, 00:04:37.283 "num_blocks": 16384, 00:04:37.283 "uuid": "09526f13-c303-4eac-bc78-cb0145e5180c", 00:04:37.283 "assigned_rate_limits": { 00:04:37.283 "rw_ios_per_sec": 0, 00:04:37.283 "rw_mbytes_per_sec": 0, 00:04:37.283 "r_mbytes_per_sec": 0, 00:04:37.283 "w_mbytes_per_sec": 0 00:04:37.283 }, 00:04:37.283 "claimed": true, 00:04:37.283 "claim_type": "exclusive_write", 00:04:37.283 "zoned": false, 00:04:37.283 "supported_io_types": { 00:04:37.283 "read": true, 00:04:37.283 "write": true, 00:04:37.283 "unmap": true, 00:04:37.283 "flush": true, 00:04:37.283 "reset": true, 00:04:37.283 "nvme_admin": false, 00:04:37.283 "nvme_io": false, 00:04:37.283 "nvme_io_md": false, 00:04:37.283 "write_zeroes": true, 00:04:37.283 "zcopy": true, 00:04:37.283 "get_zone_info": false, 00:04:37.283 "zone_management": false, 00:04:37.283 "zone_append": false, 00:04:37.283 "compare": false, 00:04:37.283 "compare_and_write": false, 00:04:37.283 "abort": true, 00:04:37.283 "seek_hole": false, 00:04:37.283 "seek_data": false, 00:04:37.283 "copy": true, 00:04:37.283 "nvme_iov_md": false 00:04:37.283 }, 00:04:37.283 "memory_domains": [ 00:04:37.283 { 00:04:37.283 "dma_device_id": "system", 00:04:37.283 "dma_device_type": 1 00:04:37.283 }, 00:04:37.283 { 00:04:37.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.283 "dma_device_type": 2 00:04:37.283 } 00:04:37.283 ], 00:04:37.283 "driver_specific": {} 00:04:37.283 }, 00:04:37.283 { 00:04:37.283 "name": "Passthru0", 00:04:37.283 "aliases": [ 00:04:37.283 "b2e680ec-6dd7-5223-b744-7f2aaa49239d" 00:04:37.283 ], 00:04:37.283 "product_name": "passthru", 00:04:37.283 "block_size": 512, 00:04:37.283 "num_blocks": 16384, 00:04:37.283 "uuid": "b2e680ec-6dd7-5223-b744-7f2aaa49239d", 00:04:37.283 "assigned_rate_limits": { 00:04:37.283 "rw_ios_per_sec": 0, 00:04:37.283 "rw_mbytes_per_sec": 0, 00:04:37.283 "r_mbytes_per_sec": 0, 00:04:37.283 "w_mbytes_per_sec": 0 00:04:37.283 }, 00:04:37.283 "claimed": false, 00:04:37.283 "zoned": false, 00:04:37.283 "supported_io_types": { 00:04:37.283 "read": true, 00:04:37.283 "write": true, 00:04:37.283 "unmap": true, 00:04:37.283 "flush": true, 00:04:37.283 "reset": true, 00:04:37.283 "nvme_admin": false, 00:04:37.283 "nvme_io": false, 00:04:37.283 "nvme_io_md": false, 00:04:37.283 "write_zeroes": true, 00:04:37.283 "zcopy": true, 00:04:37.283 "get_zone_info": false, 00:04:37.283 "zone_management": false, 00:04:37.283 "zone_append": false, 00:04:37.283 "compare": false, 00:04:37.283 "compare_and_write": false, 00:04:37.283 "abort": true, 00:04:37.283 "seek_hole": false, 00:04:37.283 "seek_data": false, 00:04:37.283 "copy": true, 00:04:37.283 "nvme_iov_md": false 00:04:37.283 
}, 00:04:37.283 "memory_domains": [ 00:04:37.283 { 00:04:37.283 "dma_device_id": "system", 00:04:37.283 "dma_device_type": 1 00:04:37.283 }, 00:04:37.283 { 00:04:37.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.283 "dma_device_type": 2 00:04:37.283 } 00:04:37.283 ], 00:04:37.283 "driver_specific": { 00:04:37.283 "passthru": { 00:04:37.284 "name": "Passthru0", 00:04:37.284 "base_bdev_name": "Malloc0" 00:04:37.284 } 00:04:37.284 } 00:04:37.284 } 00:04:37.284 ]' 00:04:37.284 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.284 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.284 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.284 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.284 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.284 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.284 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.284 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.284 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.284 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.284 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.284 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.284 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.284 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.284 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.284 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.284 21:29:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.284 00:04:37.284 real 0m0.279s 00:04:37.284 user 0m0.173s 00:04:37.284 sys 0m0.037s 00:04:37.284 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.284 21:29:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.284 ************************************ 00:04:37.284 END TEST rpc_integrity 00:04:37.284 ************************************ 00:04:37.543 21:29:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.543 21:29:45 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.543 21:29:45 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.543 21:29:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 ************************************ 00:04:37.543 START TEST rpc_plugins 00:04:37.543 ************************************ 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:37.543 21:29:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.543 21:29:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.543 21:29:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.543 21:29:45 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.543 21:29:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.543 { 00:04:37.543 "name": "Malloc1", 00:04:37.543 "aliases": [ 00:04:37.543 "4f648fa9-7c9a-4937-b947-970565cc0aee" 00:04:37.543 ], 00:04:37.543 "product_name": "Malloc disk", 00:04:37.543 "block_size": 4096, 00:04:37.543 "num_blocks": 256, 00:04:37.543 "uuid": "4f648fa9-7c9a-4937-b947-970565cc0aee", 00:04:37.543 "assigned_rate_limits": { 00:04:37.543 "rw_ios_per_sec": 0, 00:04:37.543 "rw_mbytes_per_sec": 0, 00:04:37.543 "r_mbytes_per_sec": 0, 00:04:37.543 "w_mbytes_per_sec": 0 00:04:37.543 }, 00:04:37.543 "claimed": false, 00:04:37.543 "zoned": false, 00:04:37.543 "supported_io_types": { 00:04:37.543 "read": true, 00:04:37.543 "write": true, 00:04:37.543 "unmap": true, 00:04:37.543 "flush": true, 00:04:37.543 "reset": true, 00:04:37.543 "nvme_admin": false, 00:04:37.543 "nvme_io": false, 00:04:37.543 "nvme_io_md": false, 00:04:37.543 "write_zeroes": true, 00:04:37.543 "zcopy": true, 00:04:37.543 "get_zone_info": false, 00:04:37.543 "zone_management": false, 00:04:37.543 "zone_append": false, 00:04:37.543 "compare": false, 00:04:37.543 "compare_and_write": false, 00:04:37.543 "abort": true, 00:04:37.543 "seek_hole": false, 00:04:37.543 "seek_data": false, 00:04:37.543 "copy": true, 00:04:37.543 "nvme_iov_md": false 00:04:37.543 }, 00:04:37.543 "memory_domains": [ 00:04:37.543 { 00:04:37.543 "dma_device_id": "system", 00:04:37.543 "dma_device_type": 1 00:04:37.543 }, 00:04:37.543 { 00:04:37.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.543 "dma_device_type": 2 00:04:37.543 } 00:04:37.543 ], 00:04:37.543 "driver_specific": {} 00:04:37.543 } 00:04:37.543 ]' 00:04:37.543 21:29:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:37.543 21:29:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.543 21:29:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.543 21:29:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.543 21:29:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.543 21:29:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:37.543 21:29:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.543 00:04:37.543 real 0m0.141s 00:04:37.543 user 0m0.089s 00:04:37.543 sys 0m0.017s 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.543 21:29:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 ************************************ 00:04:37.543 END TEST rpc_plugins 00:04:37.543 ************************************ 00:04:37.543 21:29:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:37.543 21:29:45 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.543 21:29:45 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.543 21:29:45 
rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 ************************************ 00:04:37.544 START TEST rpc_trace_cmd_test 00:04:37.544 ************************************ 00:04:37.544 21:29:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:37.544 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:37.544 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:37.544 21:29:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.544 21:29:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.802 21:29:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.802 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:37.802 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2872271", 00:04:37.802 "tpoint_group_mask": "0x8", 00:04:37.802 "iscsi_conn": { 00:04:37.802 "mask": "0x2", 00:04:37.802 "tpoint_mask": "0x0" 00:04:37.802 }, 00:04:37.802 "scsi": { 00:04:37.802 "mask": "0x4", 00:04:37.802 "tpoint_mask": "0x0" 00:04:37.802 }, 00:04:37.802 "bdev": { 00:04:37.802 "mask": "0x8", 00:04:37.802 "tpoint_mask": "0xffffffffffffffff" 00:04:37.802 }, 00:04:37.802 "nvmf_rdma": { 00:04:37.802 "mask": "0x10", 00:04:37.802 "tpoint_mask": "0x0" 00:04:37.802 }, 00:04:37.802 "nvmf_tcp": { 00:04:37.802 "mask": "0x20", 00:04:37.802 "tpoint_mask": "0x0" 00:04:37.802 }, 00:04:37.802 "ftl": { 00:04:37.802 "mask": "0x40", 00:04:37.802 "tpoint_mask": "0x0" 00:04:37.802 }, 00:04:37.803 "blobfs": { 00:04:37.803 "mask": "0x80", 00:04:37.803 "tpoint_mask": "0x0" 00:04:37.803 }, 00:04:37.803 "dsa": { 00:04:37.803 "mask": "0x200", 00:04:37.803 "tpoint_mask": "0x0" 00:04:37.803 }, 00:04:37.803 "thread": { 00:04:37.803 "mask": "0x400", 00:04:37.803 "tpoint_mask": "0x0" 00:04:37.803 }, 00:04:37.803 "nvme_pcie": { 00:04:37.803 "mask": "0x800", 00:04:37.803 "tpoint_mask": "0x0" 00:04:37.803 }, 00:04:37.803 "iaa": { 00:04:37.803 "mask": "0x1000", 00:04:37.803 "tpoint_mask": "0x0" 00:04:37.803 }, 00:04:37.803 "nvme_tcp": { 00:04:37.803 "mask": "0x2000", 00:04:37.803 "tpoint_mask": "0x0" 00:04:37.803 }, 00:04:37.803 "bdev_nvme": { 00:04:37.803 "mask": "0x4000", 00:04:37.803 "tpoint_mask": "0x0" 00:04:37.803 }, 00:04:37.803 "sock": { 00:04:37.803 "mask": "0x8000", 00:04:37.803 "tpoint_mask": "0x0" 00:04:37.803 } 00:04:37.803 }' 00:04:37.803 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:37.803 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:37.803 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:37.803 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:37.803 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:37.803 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:37.803 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:37.803 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:37.803 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:37.803 21:29:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:37.803 00:04:37.803 real 0m0.221s 00:04:37.803 user 0m0.183s 00:04:37.803 sys 0m0.029s 00:04:37.803 21:29:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.803 21:29:45 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.803 ************************************ 00:04:37.803 END TEST rpc_trace_cmd_test 00:04:37.803 ************************************ 00:04:37.803 21:29:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:37.803 21:29:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:37.803 21:29:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:37.803 21:29:45 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.803 21:29:45 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.803 21:29:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.062 ************************************ 00:04:38.062 START TEST rpc_daemon_integrity 00:04:38.062 ************************************ 00:04:38.062 21:29:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:38.062 21:29:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.062 21:29:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.062 21:29:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.062 21:29:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.062 21:29:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.062 21:29:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.062 21:29:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.062 21:29:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.062 21:29:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.062 21:29:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.062 { 00:04:38.062 "name": "Malloc2", 00:04:38.062 "aliases": [ 00:04:38.062 "667ddf6e-e813-4fb5-8f74-057f5e9b3f79" 00:04:38.062 ], 00:04:38.062 "product_name": "Malloc disk", 00:04:38.062 "block_size": 512, 00:04:38.062 "num_blocks": 16384, 00:04:38.062 "uuid": "667ddf6e-e813-4fb5-8f74-057f5e9b3f79", 00:04:38.062 "assigned_rate_limits": { 00:04:38.062 "rw_ios_per_sec": 0, 00:04:38.062 "rw_mbytes_per_sec": 0, 00:04:38.062 "r_mbytes_per_sec": 0, 00:04:38.062 "w_mbytes_per_sec": 0 00:04:38.062 }, 00:04:38.062 "claimed": false, 00:04:38.062 "zoned": false, 00:04:38.062 "supported_io_types": { 00:04:38.062 "read": true, 00:04:38.062 "write": true, 00:04:38.062 "unmap": true, 00:04:38.062 "flush": true, 00:04:38.062 "reset": true, 00:04:38.062 "nvme_admin": false, 00:04:38.062 "nvme_io": false, 00:04:38.062 "nvme_io_md": false, 00:04:38.062 "write_zeroes": true, 00:04:38.062 "zcopy": true, 00:04:38.062 "get_zone_info": false, 00:04:38.062 "zone_management": false, 00:04:38.062 "zone_append": false, 00:04:38.062 "compare": false, 00:04:38.062 "compare_and_write": false, 
00:04:38.062 "abort": true, 00:04:38.062 "seek_hole": false, 00:04:38.062 "seek_data": false, 00:04:38.062 "copy": true, 00:04:38.062 "nvme_iov_md": false 00:04:38.062 }, 00:04:38.062 "memory_domains": [ 00:04:38.062 { 00:04:38.062 "dma_device_id": "system", 00:04:38.062 "dma_device_type": 1 00:04:38.062 }, 00:04:38.062 { 00:04:38.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.062 "dma_device_type": 2 00:04:38.062 } 00:04:38.062 ], 00:04:38.062 "driver_specific": {} 00:04:38.062 } 00:04:38.062 ]' 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.062 [2024-07-24 21:29:46.065069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:38.062 [2024-07-24 21:29:46.065096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.062 [2024-07-24 21:29:46.065107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x110fac0 00:04:38.062 [2024-07-24 21:29:46.065114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.062 [2024-07-24 21:29:46.066060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.062 [2024-07-24 21:29:46.066081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.062 Passthru0 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.062 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.062 { 00:04:38.062 "name": "Malloc2", 00:04:38.062 "aliases": [ 00:04:38.062 "667ddf6e-e813-4fb5-8f74-057f5e9b3f79" 00:04:38.062 ], 00:04:38.062 "product_name": "Malloc disk", 00:04:38.062 "block_size": 512, 00:04:38.062 "num_blocks": 16384, 00:04:38.062 "uuid": "667ddf6e-e813-4fb5-8f74-057f5e9b3f79", 00:04:38.062 "assigned_rate_limits": { 00:04:38.062 "rw_ios_per_sec": 0, 00:04:38.062 "rw_mbytes_per_sec": 0, 00:04:38.062 "r_mbytes_per_sec": 0, 00:04:38.062 "w_mbytes_per_sec": 0 00:04:38.062 }, 00:04:38.062 "claimed": true, 00:04:38.062 "claim_type": "exclusive_write", 00:04:38.062 "zoned": false, 00:04:38.062 "supported_io_types": { 00:04:38.062 "read": true, 00:04:38.062 "write": true, 00:04:38.062 "unmap": true, 00:04:38.062 "flush": true, 00:04:38.062 "reset": true, 00:04:38.062 "nvme_admin": false, 00:04:38.062 "nvme_io": false, 00:04:38.062 "nvme_io_md": false, 00:04:38.062 "write_zeroes": true, 00:04:38.062 "zcopy": true, 00:04:38.062 "get_zone_info": false, 00:04:38.062 "zone_management": false, 00:04:38.062 "zone_append": false, 00:04:38.062 "compare": false, 00:04:38.062 "compare_and_write": false, 00:04:38.062 "abort": true, 00:04:38.062 "seek_hole": false, 00:04:38.062 "seek_data": false, 00:04:38.062 "copy": true, 
00:04:38.062 "nvme_iov_md": false 00:04:38.062 }, 00:04:38.062 "memory_domains": [ 00:04:38.062 { 00:04:38.062 "dma_device_id": "system", 00:04:38.062 "dma_device_type": 1 00:04:38.062 }, 00:04:38.062 { 00:04:38.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.062 "dma_device_type": 2 00:04:38.062 } 00:04:38.062 ], 00:04:38.062 "driver_specific": {} 00:04:38.062 }, 00:04:38.062 { 00:04:38.062 "name": "Passthru0", 00:04:38.062 "aliases": [ 00:04:38.062 "9a9b1d5b-a4a4-5e3b-a542-fd867e4246b5" 00:04:38.062 ], 00:04:38.062 "product_name": "passthru", 00:04:38.062 "block_size": 512, 00:04:38.062 "num_blocks": 16384, 00:04:38.062 "uuid": "9a9b1d5b-a4a4-5e3b-a542-fd867e4246b5", 00:04:38.062 "assigned_rate_limits": { 00:04:38.062 "rw_ios_per_sec": 0, 00:04:38.062 "rw_mbytes_per_sec": 0, 00:04:38.062 "r_mbytes_per_sec": 0, 00:04:38.062 "w_mbytes_per_sec": 0 00:04:38.062 }, 00:04:38.062 "claimed": false, 00:04:38.062 "zoned": false, 00:04:38.062 "supported_io_types": { 00:04:38.062 "read": true, 00:04:38.062 "write": true, 00:04:38.062 "unmap": true, 00:04:38.062 "flush": true, 00:04:38.062 "reset": true, 00:04:38.062 "nvme_admin": false, 00:04:38.062 "nvme_io": false, 00:04:38.062 "nvme_io_md": false, 00:04:38.062 "write_zeroes": true, 00:04:38.062 "zcopy": true, 00:04:38.062 "get_zone_info": false, 00:04:38.062 "zone_management": false, 00:04:38.063 "zone_append": false, 00:04:38.063 "compare": false, 00:04:38.063 "compare_and_write": false, 00:04:38.063 "abort": true, 00:04:38.063 "seek_hole": false, 00:04:38.063 "seek_data": false, 00:04:38.063 "copy": true, 00:04:38.063 "nvme_iov_md": false 00:04:38.063 }, 00:04:38.063 "memory_domains": [ 00:04:38.063 { 00:04:38.063 "dma_device_id": "system", 00:04:38.063 "dma_device_type": 1 00:04:38.063 }, 00:04:38.063 { 00:04:38.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.063 "dma_device_type": 2 00:04:38.063 } 00:04:38.063 ], 00:04:38.063 "driver_specific": { 00:04:38.063 "passthru": { 00:04:38.063 "name": "Passthru0", 00:04:38.063 "base_bdev_name": "Malloc2" 00:04:38.063 } 00:04:38.063 } 00:04:38.063 } 00:04:38.063 ]' 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.063 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.063 21:29:46 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.321 21:29:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.321 00:04:38.321 real 0m0.276s 00:04:38.321 user 0m0.178s 00:04:38.321 sys 0m0.037s 00:04:38.321 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.321 21:29:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.321 ************************************ 00:04:38.321 END TEST rpc_daemon_integrity 00:04:38.321 ************************************ 00:04:38.321 21:29:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:38.321 21:29:46 rpc -- rpc/rpc.sh@84 -- # killprocess 2872271 00:04:38.321 21:29:46 rpc -- common/autotest_common.sh@948 -- # '[' -z 2872271 ']' 00:04:38.321 21:29:46 rpc -- common/autotest_common.sh@952 -- # kill -0 2872271 00:04:38.321 21:29:46 rpc -- common/autotest_common.sh@953 -- # uname 00:04:38.321 21:29:46 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:38.321 21:29:46 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2872271 00:04:38.321 21:29:46 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:38.321 21:29:46 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:38.321 21:29:46 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2872271' 00:04:38.321 killing process with pid 2872271 00:04:38.321 21:29:46 rpc -- common/autotest_common.sh@967 -- # kill 2872271 00:04:38.321 21:29:46 rpc -- common/autotest_common.sh@972 -- # wait 2872271 00:04:38.579 00:04:38.579 real 0m2.454s 00:04:38.579 user 0m3.188s 00:04:38.579 sys 0m0.652s 00:04:38.579 21:29:46 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.579 21:29:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.579 ************************************ 00:04:38.579 END TEST rpc 00:04:38.579 ************************************ 00:04:38.580 21:29:46 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:38.580 21:29:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.580 21:29:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.580 21:29:46 -- common/autotest_common.sh@10 -- # set +x 00:04:38.580 ************************************ 00:04:38.580 START TEST skip_rpc 00:04:38.580 ************************************ 00:04:38.580 21:29:46 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:38.838 * Looking for test storage... 
00:04:38.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:38.838 21:29:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:38.838 21:29:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:38.838 21:29:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:38.838 21:29:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.838 21:29:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.838 21:29:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.838 ************************************ 00:04:38.838 START TEST skip_rpc 00:04:38.838 ************************************ 00:04:38.838 21:29:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:38.838 21:29:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2872906 00:04:38.838 21:29:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:38.838 21:29:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.838 21:29:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:38.838 [2024-07-24 21:29:46.818919] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:04:38.838 [2024-07-24 21:29:46.818962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872906 ] 00:04:38.838 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.838 [2024-07-24 21:29:46.870609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.838 [2024-07-24 21:29:46.942888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.106 21:29:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:44.106 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:44.106 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:44.106 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:44.106 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.106 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:44.106 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.106 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:44.106 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.106 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.106 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:44.106 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2872906 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2872906 ']' 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2872906 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2872906 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2872906' 00:04:44.107 killing process with pid 2872906 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2872906 00:04:44.107 21:29:51 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2872906 00:04:44.107 00:04:44.107 real 0m5.364s 00:04:44.107 user 0m5.142s 00:04:44.107 sys 0m0.251s 00:04:44.107 21:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.107 21:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.107 ************************************ 00:04:44.107 END TEST skip_rpc 00:04:44.107 ************************************ 00:04:44.107 21:29:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:44.107 21:29:52 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.107 21:29:52 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.107 21:29:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.107 ************************************ 00:04:44.107 START TEST skip_rpc_with_json 00:04:44.107 ************************************ 00:04:44.107 21:29:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:44.107 21:29:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:44.107 21:29:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2873853 00:04:44.107 21:29:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.107 21:29:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.107 21:29:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2873853 00:04:44.107 21:29:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2873853 ']' 00:04:44.107 21:29:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.107 21:29:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.107 21:29:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
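Note: the skip_rpc_with_json case starting here exercises the save/restore path: once the new target is up it creates a TCP transport over RPC, dumps the live configuration with save_config into test/rpc/config.json, then kills the target and starts a fresh one directly from that file (the --no-rpc-server --json invocation further down), finally grepping the log for 'TCP Transport Init' to confirm the transport was recreated from the saved config. A condensed sketch of the same cycle, with the output path an arbitrary choice here:

    ./scripts/rpc.py nvmf_create_transport -t tcp            # state worth capturing
    ./scripts/rpc.py save_config > /tmp/spdk_config.json     # dump the running configuration as JSON
    # ...stop the target, then bring one up non-interactively from the saved file:
    ./build/bin/spdk_tgt --json /tmp/spdk_config.json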
00:04:44.107 21:29:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.107 21:29:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.365 [2024-07-24 21:29:52.250836] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:04:44.365 [2024-07-24 21:29:52.250877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873853 ] 00:04:44.365 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.365 [2024-07-24 21:29:52.303258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.365 [2024-07-24 21:29:52.382828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.932 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.932 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:44.932 21:29:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:45.191 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.191 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.191 [2024-07-24 21:29:53.051591] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:45.191 request: 00:04:45.191 { 00:04:45.191 "trtype": "tcp", 00:04:45.191 "method": "nvmf_get_transports", 00:04:45.191 "req_id": 1 00:04:45.191 } 00:04:45.191 Got JSON-RPC error response 00:04:45.191 response: 00:04:45.191 { 00:04:45.191 "code": -19, 00:04:45.191 "message": "No such device" 00:04:45.191 } 00:04:45.191 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:45.191 21:29:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:45.191 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.191 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.191 [2024-07-24 21:29:53.063692] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.191 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.191 21:29:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:45.191 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.191 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.191 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.191 21:29:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:45.191 { 00:04:45.191 "subsystems": [ 00:04:45.191 { 00:04:45.191 "subsystem": "vfio_user_target", 00:04:45.191 "config": null 00:04:45.191 }, 00:04:45.191 { 00:04:45.191 "subsystem": "keyring", 00:04:45.191 "config": [] 00:04:45.191 }, 00:04:45.191 { 00:04:45.191 "subsystem": "iobuf", 00:04:45.191 "config": [ 00:04:45.191 { 00:04:45.191 "method": "iobuf_set_options", 00:04:45.191 "params": { 00:04:45.191 "small_pool_count": 8192, 00:04:45.192 "large_pool_count": 1024, 00:04:45.192 "small_bufsize": 8192, 00:04:45.192 "large_bufsize": 
135168 00:04:45.192 } 00:04:45.192 } 00:04:45.192 ] 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "subsystem": "sock", 00:04:45.192 "config": [ 00:04:45.192 { 00:04:45.192 "method": "sock_set_default_impl", 00:04:45.192 "params": { 00:04:45.192 "impl_name": "posix" 00:04:45.192 } 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "method": "sock_impl_set_options", 00:04:45.192 "params": { 00:04:45.192 "impl_name": "ssl", 00:04:45.192 "recv_buf_size": 4096, 00:04:45.192 "send_buf_size": 4096, 00:04:45.192 "enable_recv_pipe": true, 00:04:45.192 "enable_quickack": false, 00:04:45.192 "enable_placement_id": 0, 00:04:45.192 "enable_zerocopy_send_server": true, 00:04:45.192 "enable_zerocopy_send_client": false, 00:04:45.192 "zerocopy_threshold": 0, 00:04:45.192 "tls_version": 0, 00:04:45.192 "enable_ktls": false 00:04:45.192 } 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "method": "sock_impl_set_options", 00:04:45.192 "params": { 00:04:45.192 "impl_name": "posix", 00:04:45.192 "recv_buf_size": 2097152, 00:04:45.192 "send_buf_size": 2097152, 00:04:45.192 "enable_recv_pipe": true, 00:04:45.192 "enable_quickack": false, 00:04:45.192 "enable_placement_id": 0, 00:04:45.192 "enable_zerocopy_send_server": true, 00:04:45.192 "enable_zerocopy_send_client": false, 00:04:45.192 "zerocopy_threshold": 0, 00:04:45.192 "tls_version": 0, 00:04:45.192 "enable_ktls": false 00:04:45.192 } 00:04:45.192 } 00:04:45.192 ] 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "subsystem": "vmd", 00:04:45.192 "config": [] 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "subsystem": "accel", 00:04:45.192 "config": [ 00:04:45.192 { 00:04:45.192 "method": "accel_set_options", 00:04:45.192 "params": { 00:04:45.192 "small_cache_size": 128, 00:04:45.192 "large_cache_size": 16, 00:04:45.192 "task_count": 2048, 00:04:45.192 "sequence_count": 2048, 00:04:45.192 "buf_count": 2048 00:04:45.192 } 00:04:45.192 } 00:04:45.192 ] 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "subsystem": "bdev", 00:04:45.192 "config": [ 00:04:45.192 { 00:04:45.192 "method": "bdev_set_options", 00:04:45.192 "params": { 00:04:45.192 "bdev_io_pool_size": 65535, 00:04:45.192 "bdev_io_cache_size": 256, 00:04:45.192 "bdev_auto_examine": true, 00:04:45.192 "iobuf_small_cache_size": 128, 00:04:45.192 "iobuf_large_cache_size": 16 00:04:45.192 } 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "method": "bdev_raid_set_options", 00:04:45.192 "params": { 00:04:45.192 "process_window_size_kb": 1024, 00:04:45.192 "process_max_bandwidth_mb_sec": 0 00:04:45.192 } 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "method": "bdev_iscsi_set_options", 00:04:45.192 "params": { 00:04:45.192 "timeout_sec": 30 00:04:45.192 } 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "method": "bdev_nvme_set_options", 00:04:45.192 "params": { 00:04:45.192 "action_on_timeout": "none", 00:04:45.192 "timeout_us": 0, 00:04:45.192 "timeout_admin_us": 0, 00:04:45.192 "keep_alive_timeout_ms": 10000, 00:04:45.192 "arbitration_burst": 0, 00:04:45.192 "low_priority_weight": 0, 00:04:45.192 "medium_priority_weight": 0, 00:04:45.192 "high_priority_weight": 0, 00:04:45.192 "nvme_adminq_poll_period_us": 10000, 00:04:45.192 "nvme_ioq_poll_period_us": 0, 00:04:45.192 "io_queue_requests": 0, 00:04:45.192 "delay_cmd_submit": true, 00:04:45.192 "transport_retry_count": 4, 00:04:45.192 "bdev_retry_count": 3, 00:04:45.192 "transport_ack_timeout": 0, 00:04:45.192 "ctrlr_loss_timeout_sec": 0, 00:04:45.192 "reconnect_delay_sec": 0, 00:04:45.192 "fast_io_fail_timeout_sec": 0, 00:04:45.192 "disable_auto_failback": false, 00:04:45.192 "generate_uuids": 
false, 00:04:45.192 "transport_tos": 0, 00:04:45.192 "nvme_error_stat": false, 00:04:45.192 "rdma_srq_size": 0, 00:04:45.192 "io_path_stat": false, 00:04:45.192 "allow_accel_sequence": false, 00:04:45.192 "rdma_max_cq_size": 0, 00:04:45.192 "rdma_cm_event_timeout_ms": 0, 00:04:45.192 "dhchap_digests": [ 00:04:45.192 "sha256", 00:04:45.192 "sha384", 00:04:45.192 "sha512" 00:04:45.192 ], 00:04:45.192 "dhchap_dhgroups": [ 00:04:45.192 "null", 00:04:45.192 "ffdhe2048", 00:04:45.192 "ffdhe3072", 00:04:45.192 "ffdhe4096", 00:04:45.192 "ffdhe6144", 00:04:45.192 "ffdhe8192" 00:04:45.192 ] 00:04:45.192 } 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "method": "bdev_nvme_set_hotplug", 00:04:45.192 "params": { 00:04:45.192 "period_us": 100000, 00:04:45.192 "enable": false 00:04:45.192 } 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "method": "bdev_wait_for_examine" 00:04:45.192 } 00:04:45.192 ] 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "subsystem": "scsi", 00:04:45.192 "config": null 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "subsystem": "scheduler", 00:04:45.192 "config": [ 00:04:45.192 { 00:04:45.192 "method": "framework_set_scheduler", 00:04:45.192 "params": { 00:04:45.192 "name": "static" 00:04:45.192 } 00:04:45.192 } 00:04:45.192 ] 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "subsystem": "vhost_scsi", 00:04:45.192 "config": [] 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "subsystem": "vhost_blk", 00:04:45.192 "config": [] 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "subsystem": "ublk", 00:04:45.192 "config": [] 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "subsystem": "nbd", 00:04:45.192 "config": [] 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "subsystem": "nvmf", 00:04:45.192 "config": [ 00:04:45.192 { 00:04:45.192 "method": "nvmf_set_config", 00:04:45.192 "params": { 00:04:45.192 "discovery_filter": "match_any", 00:04:45.192 "admin_cmd_passthru": { 00:04:45.192 "identify_ctrlr": false 00:04:45.192 } 00:04:45.192 } 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "method": "nvmf_set_max_subsystems", 00:04:45.192 "params": { 00:04:45.192 "max_subsystems": 1024 00:04:45.192 } 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "method": "nvmf_set_crdt", 00:04:45.192 "params": { 00:04:45.192 "crdt1": 0, 00:04:45.192 "crdt2": 0, 00:04:45.192 "crdt3": 0 00:04:45.192 } 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "method": "nvmf_create_transport", 00:04:45.192 "params": { 00:04:45.192 "trtype": "TCP", 00:04:45.192 "max_queue_depth": 128, 00:04:45.192 "max_io_qpairs_per_ctrlr": 127, 00:04:45.192 "in_capsule_data_size": 4096, 00:04:45.192 "max_io_size": 131072, 00:04:45.192 "io_unit_size": 131072, 00:04:45.192 "max_aq_depth": 128, 00:04:45.192 "num_shared_buffers": 511, 00:04:45.192 "buf_cache_size": 4294967295, 00:04:45.192 "dif_insert_or_strip": false, 00:04:45.192 "zcopy": false, 00:04:45.192 "c2h_success": true, 00:04:45.192 "sock_priority": 0, 00:04:45.192 "abort_timeout_sec": 1, 00:04:45.192 "ack_timeout": 0, 00:04:45.192 "data_wr_pool_size": 0 00:04:45.192 } 00:04:45.192 } 00:04:45.192 ] 00:04:45.192 }, 00:04:45.192 { 00:04:45.192 "subsystem": "iscsi", 00:04:45.192 "config": [ 00:04:45.192 { 00:04:45.192 "method": "iscsi_set_options", 00:04:45.192 "params": { 00:04:45.192 "node_base": "iqn.2016-06.io.spdk", 00:04:45.192 "max_sessions": 128, 00:04:45.192 "max_connections_per_session": 2, 00:04:45.192 "max_queue_depth": 64, 00:04:45.192 "default_time2wait": 2, 00:04:45.192 "default_time2retain": 20, 00:04:45.192 "first_burst_length": 8192, 00:04:45.192 "immediate_data": true, 00:04:45.192 "allow_duplicated_isid": 
false, 00:04:45.192 "error_recovery_level": 0, 00:04:45.192 "nop_timeout": 60, 00:04:45.192 "nop_in_interval": 30, 00:04:45.192 "disable_chap": false, 00:04:45.192 "require_chap": false, 00:04:45.192 "mutual_chap": false, 00:04:45.192 "chap_group": 0, 00:04:45.192 "max_large_datain_per_connection": 64, 00:04:45.192 "max_r2t_per_connection": 4, 00:04:45.192 "pdu_pool_size": 36864, 00:04:45.192 "immediate_data_pool_size": 16384, 00:04:45.192 "data_out_pool_size": 2048 00:04:45.192 } 00:04:45.192 } 00:04:45.192 ] 00:04:45.192 } 00:04:45.192 ] 00:04:45.192 } 00:04:45.192 21:29:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:45.192 21:29:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2873853 00:04:45.192 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2873853 ']' 00:04:45.192 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2873853 00:04:45.192 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:45.192 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.192 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2873853 00:04:45.192 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.193 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.193 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2873853' 00:04:45.193 killing process with pid 2873853 00:04:45.193 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2873853 00:04:45.193 21:29:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2873853 00:04:45.760 21:29:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2874093 00:04:45.760 21:29:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:45.760 21:29:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2874093 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2874093 ']' 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2874093 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2874093 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2874093' 00:04:51.028 killing process with pid 2874093 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2874093 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 
2874093 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:51.028 00:04:51.028 real 0m6.743s 00:04:51.028 user 0m6.597s 00:04:51.028 sys 0m0.564s 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.028 21:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.028 ************************************ 00:04:51.028 END TEST skip_rpc_with_json 00:04:51.028 ************************************ 00:04:51.028 21:29:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:51.028 21:29:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.028 21:29:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.028 21:29:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.028 ************************************ 00:04:51.028 START TEST skip_rpc_with_delay 00:04:51.028 ************************************ 00:04:51.028 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:51.028 21:29:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.028 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:51.028 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.028 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.028 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.028 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.028 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.028 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.028 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.028 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.028 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:51.029 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.029 [2024-07-24 21:29:59.064629] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
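Note: that error is the expected result of the skip_rpc_with_delay case, not a regression: --wait-for-rpc asks spdk_tgt to pause start-up until initialization is resumed over RPC, which is impossible when --no-rpc-server suppresses the RPC listener altogether, so the app refuses the combination and exits. The same --no-rpc-server flag is what the earlier skip_rpc case relied on when it required rpc_cmd spdk_get_version to fail. A sketch of the two invocations, assuming a local build:

    # usable: no RPC server at all, so clients such as scripts/rpc.py cannot connect
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1
    # rejected at start-up: --wait-for-rpc depends on the RPC server that
    # --no-rpc-server disables, producing the error captured above
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc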
00:04:51.029 [2024-07-24 21:29:59.064690] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:51.029 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:51.029 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.029 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:51.029 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.029 00:04:51.029 real 0m0.068s 00:04:51.029 user 0m0.044s 00:04:51.029 sys 0m0.023s 00:04:51.029 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.029 21:29:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:51.029 ************************************ 00:04:51.029 END TEST skip_rpc_with_delay 00:04:51.029 ************************************ 00:04:51.029 21:29:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:51.029 21:29:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:51.029 21:29:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:51.029 21:29:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.029 21:29:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.029 21:29:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.286 ************************************ 00:04:51.286 START TEST exit_on_failed_rpc_init 00:04:51.286 ************************************ 00:04:51.286 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:51.286 21:29:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2875066 00:04:51.286 21:29:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.286 21:29:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2875066 00:04:51.286 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2875066 ']' 00:04:51.286 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.286 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.286 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.286 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.286 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.286 [2024-07-24 21:29:59.198270] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:04:51.286 [2024-07-24 21:29:59.198311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875066 ] 00:04:51.286 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.286 [2024-07-24 21:29:59.250682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.286 [2024-07-24 21:29:59.330003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.222 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.222 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:52.222 21:29:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.222 21:29:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.222 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:52.222 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.222 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.222 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.222 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.222 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.222 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.222 21:29:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.222 [2024-07-24 21:30:00.052333] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
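The exit_on_failed_rpc_init case that continues below reduces to two launches of the same binary: the first target owns the default RPC socket, and the second must fail to initialize because /var/tmp/spdk.sock is already taken. A rough sketch of that pattern, using the paths and core masks from the trace (the real waitforlisten polling is replaced by a crude sleep):

  # Sketch only: the second instance is expected to fail RPC init and exit non-zero.
  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &        # first target claims /var/tmp/spdk.sock
  first_pid=$!
  sleep 1                     # stand-in for the real waitforlisten helper
  "$SPDK_TGT" -m 0x2          # expected: "RPC Unix domain socket path
                              # /var/tmp/spdk.sock in use. Specify another."
  kill "$first_pid"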
00:04:52.222 [2024-07-24 21:30:00.052392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875241 ] 00:04:52.222 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.222 [2024-07-24 21:30:00.103910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.222 [2024-07-24 21:30:00.179576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.222 [2024-07-24 21:30:00.179640] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:52.222 [2024-07-24 21:30:00.179649] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:52.222 [2024-07-24 21:30:00.179655] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2875066 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2875066 ']' 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2875066 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2875066 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2875066' 00:04:52.222 killing process with pid 2875066 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2875066 00:04:52.222 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2875066 00:04:52.502 00:04:52.502 real 0m1.468s 00:04:52.502 user 0m1.704s 00:04:52.502 sys 0m0.392s 00:04:52.502 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.502 21:30:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.502 ************************************ 00:04:52.502 END TEST exit_on_failed_rpc_init 00:04:52.502 ************************************ 00:04:52.761 21:30:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.761 00:04:52.761 real 0m13.995s 00:04:52.761 user 0m13.620s 00:04:52.761 sys 0m1.475s 00:04:52.761 21:30:00 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.761 21:30:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.761 ************************************ 00:04:52.761 END TEST skip_rpc 00:04:52.761 ************************************ 00:04:52.761 21:30:00 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:52.761 21:30:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.761 21:30:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.761 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:04:52.761 ************************************ 00:04:52.761 START TEST rpc_client 00:04:52.761 ************************************ 00:04:52.761 21:30:00 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:52.761 * Looking for test storage... 00:04:52.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:52.761 21:30:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:52.761 OK 00:04:52.761 21:30:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:52.761 00:04:52.761 real 0m0.105s 00:04:52.761 user 0m0.054s 00:04:52.761 sys 0m0.058s 00:04:52.761 21:30:00 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.761 21:30:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:52.761 ************************************ 00:04:52.761 END TEST rpc_client 00:04:52.761 ************************************ 00:04:52.761 21:30:00 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:52.761 21:30:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.761 21:30:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.761 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:04:52.761 ************************************ 00:04:52.761 START TEST json_config 00:04:52.761 ************************************ 00:04:52.761 21:30:00 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.020 21:30:00 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
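Once this json_config run has sourced its environment, it builds a small NVMe-oF/TCP target over the RPC socket before exercising save_config; the sequence it drives shows up later in the trace as tgt_rpc calls. Condensed into plain rpc.py invocations against the same socket, it looks roughly like this (a sketch only, keeping the sizes, NQN, and listener address exactly as they appear in the trace):

  # Sketch only: the subsystem configuration the test constructs before saving it.
  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420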
00:04:53.020 21:30:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.020 21:30:00 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.020 21:30:00 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.020 21:30:00 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.020 21:30:00 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.020 21:30:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.020 21:30:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.020 21:30:00 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.020 21:30:00 json_config -- paths/export.sh@5 -- # export PATH 00:04:53.021 21:30:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.021 21:30:00 json_config -- nvmf/common.sh@47 -- # : 0 00:04:53.021 21:30:00 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:53.021 21:30:00 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:53.021 21:30:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.021 21:30:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.021 21:30:00 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.021 21:30:00 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:53.021 21:30:00 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:53.021 21:30:00 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:53.021 INFO: JSON configuration test init 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:53.021 21:30:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.021 21:30:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:53.021 21:30:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.021 21:30:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.021 21:30:00 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:53.021 21:30:00 json_config -- json_config/common.sh@9 -- # local app=target 00:04:53.021 21:30:00 json_config -- json_config/common.sh@10 -- # shift 00:04:53.021 21:30:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.021 21:30:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.021 21:30:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.021 21:30:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:04:53.021 21:30:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.021 21:30:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2875482 00:04:53.021 21:30:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.021 Waiting for target to run... 00:04:53.021 21:30:00 json_config -- json_config/common.sh@25 -- # waitforlisten 2875482 /var/tmp/spdk_tgt.sock 00:04:53.021 21:30:00 json_config -- common/autotest_common.sh@829 -- # '[' -z 2875482 ']' 00:04:53.021 21:30:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:53.021 21:30:00 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.021 21:30:00 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.021 21:30:00 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.021 21:30:00 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.021 21:30:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.021 [2024-07-24 21:30:01.031259] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:04:53.021 [2024-07-24 21:30:01.031308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875482 ] 00:04:53.021 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.588 [2024-07-24 21:30:01.465272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.588 [2024-07-24 21:30:01.556321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.847 21:30:01 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.847 21:30:01 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:53.847 21:30:01 json_config -- json_config/common.sh@26 -- # echo '' 00:04:53.847 00:04:53.847 21:30:01 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:53.847 21:30:01 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:53.847 21:30:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.847 21:30:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.847 21:30:01 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:53.847 21:30:01 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:53.847 21:30:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.847 21:30:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.847 21:30:01 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:53.847 21:30:01 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:53.847 21:30:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:57.136 21:30:04 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:04:57.137 21:30:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:57.137 21:30:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.137 21:30:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.137 21:30:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:57.137 21:30:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:57.137 21:30:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:57.137 21:30:04 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:57.137 21:30:04 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:57.137 21:30:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@51 -- # sort 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:57.137 21:30:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.137 21:30:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:57.137 21:30:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.137 21:30:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:57.137 21:30:05 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:57.137 21:30:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:57.397 MallocForNvmf0 00:04:57.397 
21:30:05 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:57.397 21:30:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:57.397 MallocForNvmf1 00:04:57.397 21:30:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:57.397 21:30:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:57.656 [2024-07-24 21:30:05.664652] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.656 21:30:05 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:57.656 21:30:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:57.915 21:30:05 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:57.915 21:30:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:57.915 21:30:06 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:57.915 21:30:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:58.175 21:30:06 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:58.175 21:30:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:58.473 [2024-07-24 21:30:06.346791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:58.473 21:30:06 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:58.473 21:30:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.473 21:30:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.473 21:30:06 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:58.473 21:30:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.473 21:30:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.473 21:30:06 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:58.473 21:30:06 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:58.473 21:30:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:58.733 MallocBdevForConfigChangeCheck 00:04:58.733 21:30:06 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:58.733 21:30:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.733 21:30:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.733 21:30:06 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:58.733 21:30:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.992 21:30:06 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:58.992 INFO: shutting down applications... 00:04:58.992 21:30:06 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:58.992 21:30:06 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:58.992 21:30:06 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:58.992 21:30:06 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:00.901 Calling clear_iscsi_subsystem 00:05:00.901 Calling clear_nvmf_subsystem 00:05:00.901 Calling clear_nbd_subsystem 00:05:00.901 Calling clear_ublk_subsystem 00:05:00.901 Calling clear_vhost_blk_subsystem 00:05:00.901 Calling clear_vhost_scsi_subsystem 00:05:00.901 Calling clear_bdev_subsystem 00:05:00.901 21:30:08 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:00.901 21:30:08 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:00.901 21:30:08 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:00.901 21:30:08 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:00.901 21:30:08 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:00.901 21:30:08 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:00.901 21:30:08 json_config -- json_config/json_config.sh@349 -- # break 00:05:00.901 21:30:08 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:00.901 21:30:08 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:00.901 21:30:08 json_config -- json_config/common.sh@31 -- # local app=target 00:05:00.901 21:30:08 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:00.901 21:30:08 json_config -- json_config/common.sh@35 -- # [[ -n 2875482 ]] 00:05:00.901 21:30:08 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2875482 00:05:00.901 21:30:08 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:00.901 21:30:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.901 21:30:08 json_config -- json_config/common.sh@41 -- # kill -0 2875482 00:05:00.901 21:30:08 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.471 21:30:09 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.471 21:30:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.471 21:30:09 json_config -- json_config/common.sh@41 -- # kill -0 2875482 00:05:01.471 21:30:09 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:05:01.471 21:30:09 json_config -- json_config/common.sh@43 -- # break 00:05:01.471 21:30:09 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:01.471 21:30:09 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:01.471 SPDK target shutdown done 00:05:01.471 21:30:09 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:01.471 INFO: relaunching applications... 00:05:01.471 21:30:09 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.471 21:30:09 json_config -- json_config/common.sh@9 -- # local app=target 00:05:01.471 21:30:09 json_config -- json_config/common.sh@10 -- # shift 00:05:01.471 21:30:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.471 21:30:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.471 21:30:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.471 21:30:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.471 21:30:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.471 21:30:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2877383 00:05:01.471 21:30:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:01.471 Waiting for target to run... 00:05:01.471 21:30:09 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.471 21:30:09 json_config -- json_config/common.sh@25 -- # waitforlisten 2877383 /var/tmp/spdk_tgt.sock 00:05:01.471 21:30:09 json_config -- common/autotest_common.sh@829 -- # '[' -z 2877383 ']' 00:05:01.471 21:30:09 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.471 21:30:09 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.471 21:30:09 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:01.471 21:30:09 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.471 21:30:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.471 [2024-07-24 21:30:09.401604] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:05:01.471 [2024-07-24 21:30:09.401664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877383 ] 00:05:01.471 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.730 [2024-07-24 21:30:09.831448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.990 [2024-07-24 21:30:09.920765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.292 [2024-07-24 21:30:12.934680] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:05.292 [2024-07-24 21:30:12.966998] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:05.551 21:30:13 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.551 21:30:13 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:05.551 21:30:13 json_config -- json_config/common.sh@26 -- # echo '' 00:05:05.551 00:05:05.551 21:30:13 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:05.551 21:30:13 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:05.551 INFO: Checking if target configuration is the same... 00:05:05.551 21:30:13 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.551 21:30:13 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:05.551 21:30:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.551 + '[' 2 -ne 2 ']' 00:05:05.551 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:05.551 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:05.551 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.551 +++ basename /dev/fd/62 00:05:05.551 ++ mktemp /tmp/62.XXX 00:05:05.551 + tmp_file_1=/tmp/62.zAv 00:05:05.551 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.551 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:05.551 + tmp_file_2=/tmp/spdk_tgt_config.json.MhJ 00:05:05.551 + ret=0 00:05:05.551 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.810 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.069 + diff -u /tmp/62.zAv /tmp/spdk_tgt_config.json.MhJ 00:05:06.069 + echo 'INFO: JSON config files are the same' 00:05:06.069 INFO: JSON config files are the same 00:05:06.069 + rm /tmp/62.zAv /tmp/spdk_tgt_config.json.MhJ 00:05:06.069 + exit 0 00:05:06.069 21:30:13 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:06.069 21:30:13 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:06.069 INFO: changing configuration and checking if this can be detected... 
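The "JSON config files are the same" verdict just above is produced by sorting both sides with config_filter.py and running a plain diff: one side is the live target's save_config output, the other is the spdk_tgt_config.json the target was relaunched with. A condensed sketch of that comparison (the temp-file and /dev/fd plumbing of json_diff.sh is simplified here):

  # Sketch only: live configuration vs. the JSON file the target booted from.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
      | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live.json
  "$SPDK/test/json_config/config_filter.py" -method sort \
      < "$SPDK/spdk_tgt_config.json" > /tmp/boot.json
  diff -u /tmp/live.json /tmp/boot.json && echo 'INFO: JSON config files are the same'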
00:05:06.069 21:30:13 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:06.069 21:30:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:06.069 21:30:14 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:06.069 21:30:14 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.069 21:30:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.069 + '[' 2 -ne 2 ']' 00:05:06.069 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:06.069 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:06.069 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:06.069 +++ basename /dev/fd/62 00:05:06.069 ++ mktemp /tmp/62.XXX 00:05:06.069 + tmp_file_1=/tmp/62.vLk 00:05:06.069 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.069 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:06.069 + tmp_file_2=/tmp/spdk_tgt_config.json.mY9 00:05:06.069 + ret=0 00:05:06.069 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.329 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.589 + diff -u /tmp/62.vLk /tmp/spdk_tgt_config.json.mY9 00:05:06.589 + ret=1 00:05:06.589 + echo '=== Start of file: /tmp/62.vLk ===' 00:05:06.589 + cat /tmp/62.vLk 00:05:06.589 + echo '=== End of file: /tmp/62.vLk ===' 00:05:06.589 + echo '' 00:05:06.589 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mY9 ===' 00:05:06.589 + cat /tmp/spdk_tgt_config.json.mY9 00:05:06.589 + echo '=== End of file: /tmp/spdk_tgt_config.json.mY9 ===' 00:05:06.589 + echo '' 00:05:06.589 + rm /tmp/62.vLk /tmp/spdk_tgt_config.json.mY9 00:05:06.589 + exit 1 00:05:06.589 21:30:14 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:06.589 INFO: configuration change detected. 
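The change-detection pass is the same sorted diff repeated after a single mutating RPC (the bdev_malloc_delete of MallocBdevForConfigChangeCheck seen in the trace), so this time a non-empty diff is the success condition. A short sketch of that negative check, with the same simplifications as above:

  # Sketch only: after deleting the marker bdev, the sorted diff must no longer match.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
      | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live.json
  if ! diff -u /tmp/live.json /tmp/boot.json; then
      echo 'INFO: configuration change detected.'
  fi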
00:05:06.589 21:30:14 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:06.589 21:30:14 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:06.589 21:30:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:06.589 21:30:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.589 21:30:14 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:06.589 21:30:14 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:06.589 21:30:14 json_config -- json_config/json_config.sh@321 -- # [[ -n 2877383 ]] 00:05:06.589 21:30:14 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:06.589 21:30:14 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:06.589 21:30:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:06.589 21:30:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.589 21:30:14 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:06.589 21:30:14 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:06.590 21:30:14 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:06.590 21:30:14 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:06.590 21:30:14 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:06.590 21:30:14 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:06.590 21:30:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:06.590 21:30:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.590 21:30:14 json_config -- json_config/json_config.sh@327 -- # killprocess 2877383 00:05:06.590 21:30:14 json_config -- common/autotest_common.sh@948 -- # '[' -z 2877383 ']' 00:05:06.590 21:30:14 json_config -- common/autotest_common.sh@952 -- # kill -0 2877383 00:05:06.590 21:30:14 json_config -- common/autotest_common.sh@953 -- # uname 00:05:06.590 21:30:14 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.590 21:30:14 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2877383 00:05:06.590 21:30:14 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.590 21:30:14 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.590 21:30:14 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2877383' 00:05:06.590 killing process with pid 2877383 00:05:06.590 21:30:14 json_config -- common/autotest_common.sh@967 -- # kill 2877383 00:05:06.590 21:30:14 json_config -- common/autotest_common.sh@972 -- # wait 2877383 00:05:07.969 21:30:16 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.969 21:30:16 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:07.969 21:30:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.969 21:30:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.969 21:30:16 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:07.969 21:30:16 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:07.969 INFO: Success 00:05:07.969 00:05:07.969 real 0m15.202s 
00:05:07.969 user 0m15.827s 00:05:07.970 sys 0m2.055s 00:05:07.970 21:30:16 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.970 21:30:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.970 ************************************ 00:05:07.970 END TEST json_config 00:05:07.970 ************************************ 00:05:08.230 21:30:16 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:08.230 21:30:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.230 21:30:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.230 21:30:16 -- common/autotest_common.sh@10 -- # set +x 00:05:08.230 ************************************ 00:05:08.230 START TEST json_config_extra_key 00:05:08.230 ************************************ 00:05:08.230 21:30:16 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:08.230 21:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:08.230 21:30:16 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.230 21:30:16 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.230 21:30:16 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.230 21:30:16 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.230 21:30:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.230 21:30:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.230 21:30:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:08.230 21:30:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:08.230 21:30:16 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:08.230 21:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:08.230 21:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:08.230 21:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:08.230 21:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:08.230 21:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:08.230 21:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:08.230 21:30:16 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:08.230 21:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:08.231 21:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:08.231 21:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:08.231 21:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:08.231 INFO: launching applications... 00:05:08.231 21:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:08.231 21:30:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:08.231 21:30:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:08.231 21:30:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.231 21:30:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.231 21:30:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.231 21:30:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.231 21:30:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.231 21:30:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2878788 00:05:08.231 21:30:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:08.231 Waiting for target to run... 00:05:08.231 21:30:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2878788 /var/tmp/spdk_tgt.sock 00:05:08.231 21:30:16 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2878788 ']' 00:05:08.231 21:30:16 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:08.231 21:30:16 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.231 21:30:16 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.231 21:30:16 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:08.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.231 21:30:16 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.231 21:30:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:08.231 [2024-07-24 21:30:16.276951] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:05:08.231 [2024-07-24 21:30:16.277006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878788 ] 00:05:08.231 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.490 [2024-07-24 21:30:16.540893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.750 [2024-07-24 21:30:16.609442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.011 21:30:17 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.011 21:30:17 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:09.011 21:30:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:09.011 00:05:09.011 21:30:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:09.011 INFO: shutting down applications... 00:05:09.011 21:30:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:09.011 21:30:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:09.011 21:30:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:09.011 21:30:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2878788 ]] 00:05:09.011 21:30:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2878788 00:05:09.011 21:30:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:09.011 21:30:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.011 21:30:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2878788 00:05:09.011 21:30:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.582 21:30:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.582 21:30:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.582 21:30:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2878788 00:05:09.582 21:30:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:09.582 21:30:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:09.582 21:30:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:09.582 21:30:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:09.582 SPDK target shutdown done 00:05:09.582 21:30:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:09.582 Success 00:05:09.582 00:05:09.582 real 0m1.449s 00:05:09.582 user 0m1.233s 00:05:09.582 sys 0m0.375s 00:05:09.582 21:30:17 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.582 21:30:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.582 ************************************ 00:05:09.582 END TEST json_config_extra_key 00:05:09.582 ************************************ 00:05:09.582 21:30:17 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:09.582 21:30:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.582 21:30:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.582 21:30:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.582 
************************************ 00:05:09.582 START TEST alias_rpc 00:05:09.582 ************************************ 00:05:09.582 21:30:17 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:09.843 * Looking for test storage... 00:05:09.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:09.843 21:30:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:09.843 21:30:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2879142 00:05:09.843 21:30:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2879142 00:05:09.843 21:30:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.843 21:30:17 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2879142 ']' 00:05:09.843 21:30:17 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.843 21:30:17 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.843 21:30:17 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.843 21:30:17 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.843 21:30:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.843 [2024-07-24 21:30:17.791750] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:09.843 [2024-07-24 21:30:17.791823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879142 ] 00:05:09.843 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.843 [2024-07-24 21:30:17.844645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.843 [2024-07-24 21:30:17.918317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.782 21:30:18 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.782 21:30:18 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:10.783 21:30:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:10.783 21:30:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2879142 00:05:10.783 21:30:18 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2879142 ']' 00:05:10.783 21:30:18 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2879142 00:05:10.783 21:30:18 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:10.783 21:30:18 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.783 21:30:18 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2879142 00:05:10.783 21:30:18 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.783 21:30:18 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.783 21:30:18 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2879142' 00:05:10.783 killing process with pid 2879142 00:05:10.783 21:30:18 alias_rpc -- common/autotest_common.sh@967 -- # kill 2879142 00:05:10.783 21:30:18 
alias_rpc -- common/autotest_common.sh@972 -- # wait 2879142 00:05:11.042 00:05:11.042 real 0m1.491s 00:05:11.042 user 0m1.639s 00:05:11.042 sys 0m0.395s 00:05:11.042 21:30:19 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.042 21:30:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.042 ************************************ 00:05:11.042 END TEST alias_rpc 00:05:11.042 ************************************ 00:05:11.302 21:30:19 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:11.302 21:30:19 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:11.302 21:30:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.302 21:30:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.302 21:30:19 -- common/autotest_common.sh@10 -- # set +x 00:05:11.302 ************************************ 00:05:11.302 START TEST spdkcli_tcp 00:05:11.302 ************************************ 00:05:11.302 21:30:19 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:11.302 * Looking for test storage... 00:05:11.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:11.302 21:30:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:11.302 21:30:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:11.302 21:30:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:11.302 21:30:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:11.302 21:30:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:11.302 21:30:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:11.302 21:30:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:11.302 21:30:19 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.302 21:30:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.302 21:30:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2879495 00:05:11.302 21:30:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2879495 00:05:11.302 21:30:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:11.302 21:30:19 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2879495 ']' 00:05:11.302 21:30:19 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.302 21:30:19 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.302 21:30:19 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.302 21:30:19 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.302 21:30:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.302 [2024-07-24 21:30:19.350511] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:05:11.302 [2024-07-24 21:30:19.350557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879495 ] 00:05:11.302 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.302 [2024-07-24 21:30:19.404502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.562 [2024-07-24 21:30:19.480275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.562 [2024-07-24 21:30:19.480277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.129 21:30:20 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.129 21:30:20 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:12.129 21:30:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:12.129 21:30:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2879512 00:05:12.130 21:30:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:12.390 [ 00:05:12.390 "bdev_malloc_delete", 00:05:12.390 "bdev_malloc_create", 00:05:12.390 "bdev_null_resize", 00:05:12.390 "bdev_null_delete", 00:05:12.390 "bdev_null_create", 00:05:12.390 "bdev_nvme_cuse_unregister", 00:05:12.390 "bdev_nvme_cuse_register", 00:05:12.390 "bdev_opal_new_user", 00:05:12.390 "bdev_opal_set_lock_state", 00:05:12.390 "bdev_opal_delete", 00:05:12.390 "bdev_opal_get_info", 00:05:12.390 "bdev_opal_create", 00:05:12.390 "bdev_nvme_opal_revert", 00:05:12.390 "bdev_nvme_opal_init", 00:05:12.390 "bdev_nvme_send_cmd", 00:05:12.390 "bdev_nvme_get_path_iostat", 00:05:12.390 "bdev_nvme_get_mdns_discovery_info", 00:05:12.390 "bdev_nvme_stop_mdns_discovery", 00:05:12.390 "bdev_nvme_start_mdns_discovery", 00:05:12.390 "bdev_nvme_set_multipath_policy", 00:05:12.390 "bdev_nvme_set_preferred_path", 00:05:12.390 "bdev_nvme_get_io_paths", 00:05:12.390 "bdev_nvme_remove_error_injection", 00:05:12.390 "bdev_nvme_add_error_injection", 00:05:12.390 "bdev_nvme_get_discovery_info", 00:05:12.390 "bdev_nvme_stop_discovery", 00:05:12.390 "bdev_nvme_start_discovery", 00:05:12.390 "bdev_nvme_get_controller_health_info", 00:05:12.390 "bdev_nvme_disable_controller", 00:05:12.390 "bdev_nvme_enable_controller", 00:05:12.390 "bdev_nvme_reset_controller", 00:05:12.390 "bdev_nvme_get_transport_statistics", 00:05:12.390 "bdev_nvme_apply_firmware", 00:05:12.390 "bdev_nvme_detach_controller", 00:05:12.390 "bdev_nvme_get_controllers", 00:05:12.390 "bdev_nvme_attach_controller", 00:05:12.390 "bdev_nvme_set_hotplug", 00:05:12.390 "bdev_nvme_set_options", 00:05:12.390 "bdev_passthru_delete", 00:05:12.390 "bdev_passthru_create", 00:05:12.390 "bdev_lvol_set_parent_bdev", 00:05:12.390 "bdev_lvol_set_parent", 00:05:12.390 "bdev_lvol_check_shallow_copy", 00:05:12.390 "bdev_lvol_start_shallow_copy", 00:05:12.390 "bdev_lvol_grow_lvstore", 00:05:12.390 "bdev_lvol_get_lvols", 00:05:12.390 "bdev_lvol_get_lvstores", 00:05:12.390 "bdev_lvol_delete", 00:05:12.390 "bdev_lvol_set_read_only", 00:05:12.390 "bdev_lvol_resize", 00:05:12.390 "bdev_lvol_decouple_parent", 00:05:12.390 "bdev_lvol_inflate", 00:05:12.390 "bdev_lvol_rename", 00:05:12.390 "bdev_lvol_clone_bdev", 00:05:12.390 "bdev_lvol_clone", 00:05:12.390 "bdev_lvol_snapshot", 00:05:12.390 "bdev_lvol_create", 00:05:12.390 "bdev_lvol_delete_lvstore", 00:05:12.390 
"bdev_lvol_rename_lvstore", 00:05:12.390 "bdev_lvol_create_lvstore", 00:05:12.390 "bdev_raid_set_options", 00:05:12.390 "bdev_raid_remove_base_bdev", 00:05:12.390 "bdev_raid_add_base_bdev", 00:05:12.390 "bdev_raid_delete", 00:05:12.390 "bdev_raid_create", 00:05:12.390 "bdev_raid_get_bdevs", 00:05:12.390 "bdev_error_inject_error", 00:05:12.390 "bdev_error_delete", 00:05:12.390 "bdev_error_create", 00:05:12.390 "bdev_split_delete", 00:05:12.390 "bdev_split_create", 00:05:12.390 "bdev_delay_delete", 00:05:12.390 "bdev_delay_create", 00:05:12.390 "bdev_delay_update_latency", 00:05:12.390 "bdev_zone_block_delete", 00:05:12.390 "bdev_zone_block_create", 00:05:12.390 "blobfs_create", 00:05:12.390 "blobfs_detect", 00:05:12.390 "blobfs_set_cache_size", 00:05:12.390 "bdev_aio_delete", 00:05:12.390 "bdev_aio_rescan", 00:05:12.390 "bdev_aio_create", 00:05:12.390 "bdev_ftl_set_property", 00:05:12.390 "bdev_ftl_get_properties", 00:05:12.390 "bdev_ftl_get_stats", 00:05:12.390 "bdev_ftl_unmap", 00:05:12.390 "bdev_ftl_unload", 00:05:12.390 "bdev_ftl_delete", 00:05:12.390 "bdev_ftl_load", 00:05:12.390 "bdev_ftl_create", 00:05:12.390 "bdev_virtio_attach_controller", 00:05:12.390 "bdev_virtio_scsi_get_devices", 00:05:12.390 "bdev_virtio_detach_controller", 00:05:12.390 "bdev_virtio_blk_set_hotplug", 00:05:12.390 "bdev_iscsi_delete", 00:05:12.390 "bdev_iscsi_create", 00:05:12.390 "bdev_iscsi_set_options", 00:05:12.390 "accel_error_inject_error", 00:05:12.390 "ioat_scan_accel_module", 00:05:12.390 "dsa_scan_accel_module", 00:05:12.390 "iaa_scan_accel_module", 00:05:12.390 "vfu_virtio_create_scsi_endpoint", 00:05:12.390 "vfu_virtio_scsi_remove_target", 00:05:12.390 "vfu_virtio_scsi_add_target", 00:05:12.390 "vfu_virtio_create_blk_endpoint", 00:05:12.390 "vfu_virtio_delete_endpoint", 00:05:12.391 "keyring_file_remove_key", 00:05:12.391 "keyring_file_add_key", 00:05:12.391 "keyring_linux_set_options", 00:05:12.391 "iscsi_get_histogram", 00:05:12.391 "iscsi_enable_histogram", 00:05:12.391 "iscsi_set_options", 00:05:12.391 "iscsi_get_auth_groups", 00:05:12.391 "iscsi_auth_group_remove_secret", 00:05:12.391 "iscsi_auth_group_add_secret", 00:05:12.391 "iscsi_delete_auth_group", 00:05:12.391 "iscsi_create_auth_group", 00:05:12.391 "iscsi_set_discovery_auth", 00:05:12.391 "iscsi_get_options", 00:05:12.391 "iscsi_target_node_request_logout", 00:05:12.391 "iscsi_target_node_set_redirect", 00:05:12.391 "iscsi_target_node_set_auth", 00:05:12.391 "iscsi_target_node_add_lun", 00:05:12.391 "iscsi_get_stats", 00:05:12.391 "iscsi_get_connections", 00:05:12.391 "iscsi_portal_group_set_auth", 00:05:12.391 "iscsi_start_portal_group", 00:05:12.391 "iscsi_delete_portal_group", 00:05:12.391 "iscsi_create_portal_group", 00:05:12.391 "iscsi_get_portal_groups", 00:05:12.391 "iscsi_delete_target_node", 00:05:12.391 "iscsi_target_node_remove_pg_ig_maps", 00:05:12.391 "iscsi_target_node_add_pg_ig_maps", 00:05:12.391 "iscsi_create_target_node", 00:05:12.391 "iscsi_get_target_nodes", 00:05:12.391 "iscsi_delete_initiator_group", 00:05:12.391 "iscsi_initiator_group_remove_initiators", 00:05:12.391 "iscsi_initiator_group_add_initiators", 00:05:12.391 "iscsi_create_initiator_group", 00:05:12.391 "iscsi_get_initiator_groups", 00:05:12.391 "nvmf_set_crdt", 00:05:12.391 "nvmf_set_config", 00:05:12.391 "nvmf_set_max_subsystems", 00:05:12.391 "nvmf_stop_mdns_prr", 00:05:12.391 "nvmf_publish_mdns_prr", 00:05:12.391 "nvmf_subsystem_get_listeners", 00:05:12.391 "nvmf_subsystem_get_qpairs", 00:05:12.391 "nvmf_subsystem_get_controllers", 00:05:12.391 
"nvmf_get_stats", 00:05:12.391 "nvmf_get_transports", 00:05:12.391 "nvmf_create_transport", 00:05:12.391 "nvmf_get_targets", 00:05:12.391 "nvmf_delete_target", 00:05:12.391 "nvmf_create_target", 00:05:12.391 "nvmf_subsystem_allow_any_host", 00:05:12.391 "nvmf_subsystem_remove_host", 00:05:12.391 "nvmf_subsystem_add_host", 00:05:12.391 "nvmf_ns_remove_host", 00:05:12.391 "nvmf_ns_add_host", 00:05:12.391 "nvmf_subsystem_remove_ns", 00:05:12.391 "nvmf_subsystem_add_ns", 00:05:12.391 "nvmf_subsystem_listener_set_ana_state", 00:05:12.391 "nvmf_discovery_get_referrals", 00:05:12.391 "nvmf_discovery_remove_referral", 00:05:12.391 "nvmf_discovery_add_referral", 00:05:12.391 "nvmf_subsystem_remove_listener", 00:05:12.391 "nvmf_subsystem_add_listener", 00:05:12.391 "nvmf_delete_subsystem", 00:05:12.391 "nvmf_create_subsystem", 00:05:12.391 "nvmf_get_subsystems", 00:05:12.391 "env_dpdk_get_mem_stats", 00:05:12.391 "nbd_get_disks", 00:05:12.391 "nbd_stop_disk", 00:05:12.391 "nbd_start_disk", 00:05:12.391 "ublk_recover_disk", 00:05:12.391 "ublk_get_disks", 00:05:12.391 "ublk_stop_disk", 00:05:12.391 "ublk_start_disk", 00:05:12.391 "ublk_destroy_target", 00:05:12.391 "ublk_create_target", 00:05:12.391 "virtio_blk_create_transport", 00:05:12.391 "virtio_blk_get_transports", 00:05:12.391 "vhost_controller_set_coalescing", 00:05:12.391 "vhost_get_controllers", 00:05:12.391 "vhost_delete_controller", 00:05:12.391 "vhost_create_blk_controller", 00:05:12.391 "vhost_scsi_controller_remove_target", 00:05:12.391 "vhost_scsi_controller_add_target", 00:05:12.391 "vhost_start_scsi_controller", 00:05:12.391 "vhost_create_scsi_controller", 00:05:12.391 "thread_set_cpumask", 00:05:12.391 "framework_get_governor", 00:05:12.391 "framework_get_scheduler", 00:05:12.391 "framework_set_scheduler", 00:05:12.391 "framework_get_reactors", 00:05:12.391 "thread_get_io_channels", 00:05:12.391 "thread_get_pollers", 00:05:12.391 "thread_get_stats", 00:05:12.391 "framework_monitor_context_switch", 00:05:12.391 "spdk_kill_instance", 00:05:12.391 "log_enable_timestamps", 00:05:12.391 "log_get_flags", 00:05:12.391 "log_clear_flag", 00:05:12.391 "log_set_flag", 00:05:12.391 "log_get_level", 00:05:12.391 "log_set_level", 00:05:12.391 "log_get_print_level", 00:05:12.391 "log_set_print_level", 00:05:12.391 "framework_enable_cpumask_locks", 00:05:12.391 "framework_disable_cpumask_locks", 00:05:12.391 "framework_wait_init", 00:05:12.391 "framework_start_init", 00:05:12.391 "scsi_get_devices", 00:05:12.391 "bdev_get_histogram", 00:05:12.391 "bdev_enable_histogram", 00:05:12.391 "bdev_set_qos_limit", 00:05:12.391 "bdev_set_qd_sampling_period", 00:05:12.391 "bdev_get_bdevs", 00:05:12.391 "bdev_reset_iostat", 00:05:12.391 "bdev_get_iostat", 00:05:12.391 "bdev_examine", 00:05:12.391 "bdev_wait_for_examine", 00:05:12.391 "bdev_set_options", 00:05:12.391 "notify_get_notifications", 00:05:12.391 "notify_get_types", 00:05:12.391 "accel_get_stats", 00:05:12.391 "accel_set_options", 00:05:12.391 "accel_set_driver", 00:05:12.391 "accel_crypto_key_destroy", 00:05:12.391 "accel_crypto_keys_get", 00:05:12.391 "accel_crypto_key_create", 00:05:12.391 "accel_assign_opc", 00:05:12.391 "accel_get_module_info", 00:05:12.391 "accel_get_opc_assignments", 00:05:12.391 "vmd_rescan", 00:05:12.391 "vmd_remove_device", 00:05:12.391 "vmd_enable", 00:05:12.391 "sock_get_default_impl", 00:05:12.391 "sock_set_default_impl", 00:05:12.391 "sock_impl_set_options", 00:05:12.391 "sock_impl_get_options", 00:05:12.391 "iobuf_get_stats", 00:05:12.391 "iobuf_set_options", 
00:05:12.391 "keyring_get_keys", 00:05:12.391 "framework_get_pci_devices", 00:05:12.391 "framework_get_config", 00:05:12.391 "framework_get_subsystems", 00:05:12.391 "vfu_tgt_set_base_path", 00:05:12.391 "trace_get_info", 00:05:12.391 "trace_get_tpoint_group_mask", 00:05:12.391 "trace_disable_tpoint_group", 00:05:12.391 "trace_enable_tpoint_group", 00:05:12.391 "trace_clear_tpoint_mask", 00:05:12.391 "trace_set_tpoint_mask", 00:05:12.391 "spdk_get_version", 00:05:12.391 "rpc_get_methods" 00:05:12.391 ] 00:05:12.391 21:30:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:12.391 21:30:20 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.391 21:30:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.391 21:30:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:12.391 21:30:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2879495 00:05:12.391 21:30:20 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2879495 ']' 00:05:12.391 21:30:20 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2879495 00:05:12.391 21:30:20 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:12.391 21:30:20 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.391 21:30:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2879495 00:05:12.391 21:30:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.391 21:30:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.391 21:30:20 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2879495' 00:05:12.391 killing process with pid 2879495 00:05:12.391 21:30:20 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2879495 00:05:12.391 21:30:20 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2879495 00:05:12.651 00:05:12.651 real 0m1.506s 00:05:12.651 user 0m2.805s 00:05:12.651 sys 0m0.428s 00:05:12.651 21:30:20 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.651 21:30:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.651 ************************************ 00:05:12.651 END TEST spdkcli_tcp 00:05:12.651 ************************************ 00:05:12.651 21:30:20 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.651 21:30:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.651 21:30:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.651 21:30:20 -- common/autotest_common.sh@10 -- # set +x 00:05:12.911 ************************************ 00:05:12.911 START TEST dpdk_mem_utility 00:05:12.911 ************************************ 00:05:12.911 21:30:20 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.911 * Looking for test storage... 
00:05:12.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:12.912 21:30:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:12.912 21:30:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2879801 00:05:12.912 21:30:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2879801 00:05:12.912 21:30:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.912 21:30:20 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2879801 ']' 00:05:12.912 21:30:20 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.912 21:30:20 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.912 21:30:20 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.912 21:30:20 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.912 21:30:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.912 [2024-07-24 21:30:20.911914] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:12.912 [2024-07-24 21:30:20.911967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879801 ] 00:05:12.912 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.912 [2024-07-24 21:30:20.964430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.171 [2024-07-24 21:30:21.047250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.741 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.741 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:13.741 21:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:13.741 21:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:13.741 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.741 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.741 { 00:05:13.741 "filename": "/tmp/spdk_mem_dump.txt" 00:05:13.741 } 00:05:13.741 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.741 21:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:13.741 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:13.741 1 heaps totaling size 814.000000 MiB 00:05:13.741 size: 814.000000 MiB heap id: 0 00:05:13.741 end heaps---------- 00:05:13.741 8 mempools totaling size 598.116089 MiB 00:05:13.741 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:13.741 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:13.741 size: 84.521057 MiB name: bdev_io_2879801 00:05:13.741 size: 51.011292 MiB name: evtpool_2879801 00:05:13.741 
size: 50.003479 MiB name: msgpool_2879801 00:05:13.741 size: 21.763794 MiB name: PDU_Pool 00:05:13.741 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:13.741 size: 0.026123 MiB name: Session_Pool 00:05:13.741 end mempools------- 00:05:13.741 6 memzones totaling size 4.142822 MiB 00:05:13.741 size: 1.000366 MiB name: RG_ring_0_2879801 00:05:13.741 size: 1.000366 MiB name: RG_ring_1_2879801 00:05:13.741 size: 1.000366 MiB name: RG_ring_4_2879801 00:05:13.741 size: 1.000366 MiB name: RG_ring_5_2879801 00:05:13.741 size: 0.125366 MiB name: RG_ring_2_2879801 00:05:13.741 size: 0.015991 MiB name: RG_ring_3_2879801 00:05:13.741 end memzones------- 00:05:13.741 21:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:13.741 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:13.741 list of free elements. size: 12.519348 MiB 00:05:13.741 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:13.741 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:13.741 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:13.741 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:13.741 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:13.741 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:13.741 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:13.741 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:13.741 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:13.741 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:13.741 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:13.741 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:13.741 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:13.741 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:13.741 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:13.741 list of standard malloc elements. 
size: 199.218079 MiB 00:05:13.741 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:13.741 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:13.741 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:13.741 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:13.741 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:13.741 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:13.741 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:13.741 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:13.741 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:13.741 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:13.741 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:13.741 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:13.741 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:13.741 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:13.741 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:13.741 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:13.741 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:13.741 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:13.741 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:13.741 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:13.741 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:13.741 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:13.741 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:13.741 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:13.741 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:13.741 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:13.741 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:13.741 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:13.741 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:13.741 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:13.741 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:13.741 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:13.741 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:13.741 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:13.741 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:13.741 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:13.741 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:13.741 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:13.741 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:13.741 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:13.741 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:13.742 list of memzone associated elements. 
size: 602.262573 MiB 00:05:13.742 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:13.742 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:13.742 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:13.742 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:13.742 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:13.742 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2879801_0 00:05:13.742 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:13.742 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2879801_0 00:05:13.742 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:13.742 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2879801_0 00:05:13.742 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:13.742 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:13.742 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:13.742 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:13.742 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:13.742 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2879801 00:05:13.742 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:13.742 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2879801 00:05:13.742 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:13.742 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2879801 00:05:13.742 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:13.742 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:13.742 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:13.742 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:13.742 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:13.742 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:13.742 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:13.742 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:13.742 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:13.742 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2879801 00:05:13.742 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:13.742 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2879801 00:05:13.742 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:13.742 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2879801 00:05:13.742 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:13.742 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2879801 00:05:13.742 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:13.742 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2879801 00:05:13.742 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:13.742 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:13.742 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:13.742 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:13.742 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:13.742 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:13.742 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:13.742 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2879801 00:05:13.742 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:13.742 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:13.742 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:13.742 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:13.742 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:13.742 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2879801 00:05:13.742 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:13.742 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:13.742 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:13.742 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2879801 00:05:13.742 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:13.742 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2879801 00:05:13.742 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:13.742 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:13.742 21:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:13.742 21:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2879801 00:05:13.742 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2879801 ']' 00:05:13.742 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2879801 00:05:13.742 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:13.742 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.742 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2879801 00:05:13.742 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.742 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.742 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2879801' 00:05:13.742 killing process with pid 2879801 00:05:13.742 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2879801 00:05:13.742 21:30:21 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2879801 00:05:14.313 00:05:14.313 real 0m1.365s 00:05:14.313 user 0m1.449s 00:05:14.313 sys 0m0.367s 00:05:14.313 21:30:22 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.313 21:30:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.313 ************************************ 00:05:14.313 END TEST dpdk_mem_utility 00:05:14.313 ************************************ 00:05:14.313 21:30:22 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:14.313 21:30:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.313 21:30:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.313 21:30:22 -- common/autotest_common.sh@10 -- # set +x 00:05:14.313 ************************************ 00:05:14.313 START TEST event 00:05:14.313 ************************************ 00:05:14.313 21:30:22 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:14.313 * Looking for test storage... 
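Annotation (illustrative, not part of the captured output): the dpdk_mem_utility section above asks a running spdk_tgt to dump its DPDK memory state with the env_dpdk_get_mem_stats RPC (the reply names /tmp/spdk_mem_dump.txt) and then renders that dump with scripts/dpdk_mem_info.py, first as the heap/mempool/memzone summary and then, with -m 0, as the element-level listing for heap id 0 shown above. The same two steps against any running target look roughly like this; the default /var/tmp/spdk.sock RPC socket is assumed.

  # dump and inspect the DPDK memory layout of a running SPDK target
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk        # path taken from the log
  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats                 # writes /tmp/spdk_mem_dump.txt
  "$SPDK/scripts/dpdk_mem_info.py"                              # heaps / mempools / memzones summary
  "$SPDK/scripts/dpdk_mem_info.py" -m 0                         # per-element detail for heap id 0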
00:05:14.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:14.313 21:30:22 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:14.313 21:30:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:14.313 21:30:22 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.313 21:30:22 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:14.313 21:30:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.313 21:30:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.313 ************************************ 00:05:14.313 START TEST event_perf 00:05:14.313 ************************************ 00:05:14.313 21:30:22 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.313 Running I/O for 1 seconds...[2024-07-24 21:30:22.336865] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:14.313 [2024-07-24 21:30:22.336913] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880088 ] 00:05:14.313 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.313 [2024-07-24 21:30:22.391985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.573 [2024-07-24 21:30:22.471220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.573 [2024-07-24 21:30:22.471320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.573 [2024-07-24 21:30:22.471431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.573 [2024-07-24 21:30:22.471433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.559 Running I/O for 1 seconds... 00:05:15.559 lcore 0: 209016 00:05:15.559 lcore 1: 209014 00:05:15.559 lcore 2: 209015 00:05:15.559 lcore 3: 209016 00:05:15.559 done. 00:05:15.559 00:05:15.559 real 0m1.216s 00:05:15.559 user 0m4.146s 00:05:15.559 sys 0m0.065s 00:05:15.559 21:30:23 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.559 21:30:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.559 ************************************ 00:05:15.559 END TEST event_perf 00:05:15.559 ************************************ 00:05:15.559 21:30:23 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:15.560 21:30:23 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:15.560 21:30:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.560 21:30:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.560 ************************************ 00:05:15.560 START TEST event_reactor 00:05:15.560 ************************************ 00:05:15.560 21:30:23 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:15.560 [2024-07-24 21:30:23.621582] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:05:15.560 [2024-07-24 21:30:23.621648] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880348 ] 00:05:15.560 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.819 [2024-07-24 21:30:23.677719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.819 [2024-07-24 21:30:23.748989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.758 test_start 00:05:16.758 oneshot 00:05:16.758 tick 100 00:05:16.758 tick 100 00:05:16.758 tick 250 00:05:16.758 tick 100 00:05:16.758 tick 100 00:05:16.758 tick 100 00:05:16.758 tick 250 00:05:16.758 tick 500 00:05:16.758 tick 100 00:05:16.758 tick 100 00:05:16.758 tick 250 00:05:16.758 tick 100 00:05:16.758 tick 100 00:05:16.758 test_end 00:05:16.758 00:05:16.758 real 0m1.214s 00:05:16.758 user 0m1.143s 00:05:16.758 sys 0m0.068s 00:05:16.758 21:30:24 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.758 21:30:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:16.758 ************************************ 00:05:16.758 END TEST event_reactor 00:05:16.758 ************************************ 00:05:16.758 21:30:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.758 21:30:24 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:16.758 21:30:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.758 21:30:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.017 ************************************ 00:05:17.017 START TEST event_reactor_perf 00:05:17.017 ************************************ 00:05:17.017 21:30:24 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.017 [2024-07-24 21:30:24.894445] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:05:17.017 [2024-07-24 21:30:24.894512] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880599 ] 00:05:17.017 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.017 [2024-07-24 21:30:24.950212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.017 [2024-07-24 21:30:25.021567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.400 test_start 00:05:18.400 test_end 00:05:18.400 Performance: 509038 events per second 00:05:18.400 00:05:18.400 real 0m1.215s 00:05:18.400 user 0m1.142s 00:05:18.400 sys 0m0.069s 00:05:18.400 21:30:26 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.400 21:30:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.400 ************************************ 00:05:18.400 END TEST event_reactor_perf 00:05:18.400 ************************************ 00:05:18.400 21:30:26 event -- event/event.sh@49 -- # uname -s 00:05:18.400 21:30:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:18.400 21:30:26 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:18.400 21:30:26 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.400 21:30:26 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.400 21:30:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.400 ************************************ 00:05:18.400 START TEST event_scheduler 00:05:18.400 ************************************ 00:05:18.400 21:30:26 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:18.400 * Looking for test storage... 00:05:18.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:18.400 21:30:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:18.400 21:30:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2880871 00:05:18.400 21:30:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.400 21:30:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:18.400 21:30:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2880871 00:05:18.400 21:30:26 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2880871 ']' 00:05:18.400 21:30:26 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.400 21:30:26 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.400 21:30:26 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:18.400 21:30:26 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.400 21:30:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.400 [2024-07-24 21:30:26.269317] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:18.400 [2024-07-24 21:30:26.269362] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880871 ] 00:05:18.400 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.400 [2024-07-24 21:30:26.319475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.400 [2024-07-24 21:30:26.395469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.400 [2024-07-24 21:30:26.395556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.400 [2024-07-24 21:30:26.395639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.400 [2024-07-24 21:30:26.395641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.969 21:30:27 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.969 21:30:27 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:18.969 21:30:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:18.969 21:30:27 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.969 21:30:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.969 [2024-07-24 21:30:27.078015] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:18.969 [2024-07-24 21:30:27.078033] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:18.969 [2024-07-24 21:30:27.078047] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:18.969 [2024-07-24 21:30:27.078053] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:18.969 [2024-07-24 21:30:27.078057] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:18.969 21:30:27 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.969 21:30:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:18.969 21:30:27 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.969 21:30:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.229 [2024-07-24 21:30:27.150460] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
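Annotation (illustrative, not part of the captured output): the startup just logged launches the scheduler test app with --wait-for-rpc, switches it to the dynamic scheduler over RPC, and only then completes initialization, which is why the dpdk_governor notice and the load/core/busy limit messages appear before 'Scheduler test application started.' Against any SPDK app started with --wait-for-rpc, the equivalent rpc.py sequence is roughly the following; the default /var/tmp/spdk.sock socket is assumed (the test's rpc_cmd wrapper resolves the same calls).

  # select the dynamic scheduler before letting framework initialization finish
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk        # path taken from the log
  "$SPDK/scripts/rpc.py" framework_set_scheduler dynamic        # same call the test issues via rpc_cmd
  "$SPDK/scripts/rpc.py" framework_start_init                   # completes subsystem initialization
  "$SPDK/scripts/rpc.py" framework_get_scheduler                # optional check of the active scheduler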
00:05:19.229 21:30:27 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.229 21:30:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:19.229 21:30:27 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.229 21:30:27 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.229 21:30:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.229 ************************************ 00:05:19.229 START TEST scheduler_create_thread 00:05:19.229 ************************************ 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.229 2 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.229 3 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.229 4 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.229 5 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.229 6 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:19.229 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.230 7 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.230 8 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.230 9 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.230 10 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.230 21:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.134 21:30:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.134 21:30:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:21.134 21:30:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:21.134 21:30:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.134 21:30:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.702 21:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.702 00:05:21.702 real 0m2.618s 00:05:21.702 user 0m0.024s 00:05:21.702 sys 0m0.004s 00:05:21.702 21:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.702 21:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.702 ************************************ 00:05:21.702 END TEST scheduler_create_thread 00:05:21.702 ************************************ 00:05:21.961 21:30:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:21.961 21:30:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2880871 00:05:21.961 21:30:29 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2880871 ']' 00:05:21.961 21:30:29 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2880871 00:05:21.961 21:30:29 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:21.961 21:30:29 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.961 21:30:29 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2880871 00:05:21.961 21:30:29 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:21.961 21:30:29 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:21.961 21:30:29 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2880871' 00:05:21.961 killing process with pid 2880871 00:05:21.961 21:30:29 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2880871 00:05:21.961 21:30:29 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2880871 00:05:22.220 [2024-07-24 21:30:30.284572] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
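Annotation (illustrative, not part of the captured output): scheduler_create_thread above drives the scheduler test app through an rpc.py plugin, creating pinned and unpinned threads with a requested activity level, retuning one with scheduler_thread_set_active, and deleting another; these scheduler_thread_* methods belong to this test application's plugin, not to the core SPDK RPC set. A condensed sketch of that call pattern follows; it assumes the scheduler_plugin module is on rpc.py's Python path and that, as in the log, scheduler_thread_create prints the new thread id.

  # exercise the scheduler test app's thread-management plugin RPCs
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk        # path taken from the log
  RPC=("$SPDK/scripts/rpc.py" --plugin scheduler_plugin)
  tid=$("${RPC[@]}" scheduler_thread_create -n active_pinned -m 0x1 -a 100)  # busy thread on core 0
  "${RPC[@]}" scheduler_thread_create -n idle_pinned -m 0x1 -a 0             # idle thread on core 0
  "${RPC[@]}" scheduler_thread_set_active "$tid" 50                          # drop the busy thread to ~50%
  "${RPC[@]}" scheduler_thread_delete "$tid"                                 # remove it again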
00:05:22.480 00:05:22.480 real 0m4.323s 00:05:22.480 user 0m8.208s 00:05:22.480 sys 0m0.355s 00:05:22.480 21:30:30 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.480 21:30:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.480 ************************************ 00:05:22.480 END TEST event_scheduler 00:05:22.480 ************************************ 00:05:22.480 21:30:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:22.480 21:30:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:22.480 21:30:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.480 21:30:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.480 21:30:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.480 ************************************ 00:05:22.480 START TEST app_repeat 00:05:22.480 ************************************ 00:05:22.480 21:30:30 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2881615 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2881615' 00:05:22.480 Process app_repeat pid: 2881615 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:22.480 spdk_app_start Round 0 00:05:22.480 21:30:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2881615 /var/tmp/spdk-nbd.sock 00:05:22.480 21:30:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2881615 ']' 00:05:22.480 21:30:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.480 21:30:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.480 21:30:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.480 21:30:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.480 21:30:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.480 [2024-07-24 21:30:30.584140] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:05:22.480 [2024-07-24 21:30:30.584201] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881615 ] 00:05:22.741 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.741 [2024-07-24 21:30:30.640169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.741 [2024-07-24 21:30:30.719399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.741 [2024-07-24 21:30:30.719403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.310 21:30:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.310 21:30:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:23.310 21:30:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.570 Malloc0 00:05:23.570 21:30:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.829 Malloc1 00:05:23.829 21:30:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.829 21:30:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.829 /dev/nbd0 00:05:24.087 21:30:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.088 21:30:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.088 21:30:31 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.088 1+0 records in 00:05:24.088 1+0 records out 00:05:24.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187173 s, 21.9 MB/s 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.088 21:30:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:24.088 21:30:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.088 21:30:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.088 21:30:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.088 /dev/nbd1 00:05:24.088 21:30:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.088 21:30:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.088 1+0 records in 00:05:24.088 1+0 records out 00:05:24.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239449 s, 17.1 MB/s 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.088 21:30:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:24.088 21:30:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.088 21:30:32 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.088 21:30:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.088 21:30:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.088 21:30:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.347 { 00:05:24.347 "nbd_device": "/dev/nbd0", 00:05:24.347 "bdev_name": "Malloc0" 00:05:24.347 }, 00:05:24.347 { 00:05:24.347 "nbd_device": "/dev/nbd1", 00:05:24.347 "bdev_name": "Malloc1" 00:05:24.347 } 00:05:24.347 ]' 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.347 { 00:05:24.347 "nbd_device": "/dev/nbd0", 00:05:24.347 "bdev_name": "Malloc0" 00:05:24.347 }, 00:05:24.347 { 00:05:24.347 "nbd_device": "/dev/nbd1", 00:05:24.347 "bdev_name": "Malloc1" 00:05:24.347 } 00:05:24.347 ]' 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.347 /dev/nbd1' 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.347 /dev/nbd1' 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.347 256+0 records in 00:05:24.347 256+0 records out 00:05:24.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00998111 s, 105 MB/s 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.347 256+0 records in 00:05:24.347 256+0 records out 00:05:24.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135666 s, 77.3 MB/s 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.347 21:30:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.607 256+0 records in 00:05:24.607 256+0 records out 00:05:24.607 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0149097 s, 70.3 MB/s 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.607 21:30:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.867 21:30:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.867 21:30:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.867 21:30:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.867 21:30:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.867 21:30:32 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.867 21:30:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.867 21:30:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.867 21:30:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.867 21:30:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.867 21:30:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.867 21:30:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.126 21:30:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.126 21:30:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.126 21:30:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.126 21:30:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.126 21:30:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.126 21:30:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.126 21:30:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.126 21:30:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.126 21:30:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.126 21:30:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.126 21:30:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.126 21:30:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.126 21:30:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.386 21:30:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.386 [2024-07-24 21:30:33.466393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.644 [2024-07-24 21:30:33.534756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.644 [2024-07-24 21:30:33.534759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.644 [2024-07-24 21:30:33.575306] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.644 [2024-07-24 21:30:33.575350] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.179 21:30:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.179 21:30:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:28.179 spdk_app_start Round 1 00:05:28.179 21:30:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2881615 /var/tmp/spdk-nbd.sock 00:05:28.180 21:30:36 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2881615 ']' 00:05:28.180 21:30:36 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.180 21:30:36 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.180 21:30:36 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
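Each app_repeat round above runs the same write/verify cycle against malloc-backed NBD devices. A hedged sketch of one device's cycle follows; the RPC socket, bdev size, block size, and cmp invocation are copied from the Round 0 trace, error handling and the second device are omitted.

# Assumption: the app_repeat binary is already listening on /var/tmp/spdk-nbd.sock.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }

rpc bdev_malloc_create 64 4096        # 64 MiB bdev, 4 KiB blocks -> "Malloc0"
rpc nbd_start_disk Malloc0 /dev/nbd0  # expose it as a kernel NBD device

tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of random data
dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct # write it to the device
cmp -b -n 1M "$tmp" /dev/nbd0                            # read back and verify
rm -f "$tmp"

rpc nbd_stop_disk /dev/nbd0
rpc nbd_get_disks | jq -r '.[] | .nbd_device'            # should print nothing now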
00:05:28.180 21:30:36 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.180 21:30:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.438 21:30:36 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.438 21:30:36 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:28.438 21:30:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.698 Malloc0 00:05:28.698 21:30:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.698 Malloc1 00:05:28.957 21:30:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.957 21:30:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.957 /dev/nbd0 00:05:28.957 21:30:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.957 21:30:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.957 21:30:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:28.957 21:30:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:28.957 21:30:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:28.957 21:30:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:28.958 21:30:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:28.958 21:30:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:28.958 21:30:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:28.958 21:30:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:28.958 21:30:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:28.958 1+0 records in 00:05:28.958 1+0 records out 00:05:28.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246186 s, 16.6 MB/s 00:05:28.958 21:30:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.958 21:30:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:28.958 21:30:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.958 21:30:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:28.958 21:30:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:28.958 21:30:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.958 21:30:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.958 21:30:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.217 /dev/nbd1 00:05:29.217 21:30:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.217 21:30:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.217 1+0 records in 00:05:29.217 1+0 records out 00:05:29.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183822 s, 22.3 MB/s 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:29.217 21:30:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:29.217 21:30:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.217 21:30:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.217 21:30:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.217 21:30:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.217 21:30:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:29.476 { 00:05:29.476 "nbd_device": "/dev/nbd0", 00:05:29.476 "bdev_name": "Malloc0" 00:05:29.476 }, 00:05:29.476 { 00:05:29.476 "nbd_device": "/dev/nbd1", 00:05:29.476 "bdev_name": "Malloc1" 00:05:29.476 } 00:05:29.476 ]' 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.476 { 00:05:29.476 "nbd_device": "/dev/nbd0", 00:05:29.476 "bdev_name": "Malloc0" 00:05:29.476 }, 00:05:29.476 { 00:05:29.476 "nbd_device": "/dev/nbd1", 00:05:29.476 "bdev_name": "Malloc1" 00:05:29.476 } 00:05:29.476 ]' 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.476 /dev/nbd1' 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.476 /dev/nbd1' 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.476 256+0 records in 00:05:29.476 256+0 records out 00:05:29.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103542 s, 101 MB/s 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.476 256+0 records in 00:05:29.476 256+0 records out 00:05:29.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136689 s, 76.7 MB/s 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.476 256+0 records in 00:05:29.476 256+0 records out 00:05:29.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01469 s, 71.4 MB/s 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:29.476 21:30:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.477 21:30:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.736 21:30:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.736 21:30:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.736 21:30:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.736 21:30:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.736 21:30:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.736 21:30:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.736 21:30:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.736 21:30:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.736 21:30:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.736 21:30:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.996 21:30:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.996 21:30:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.996 21:30:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.996 21:30:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.996 21:30:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.996 21:30:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.996 21:30:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.996 21:30:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.996 21:30:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.996 21:30:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.996 21:30:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.996 21:30:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.996 21:30:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.996 21:30:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.255 21:30:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.255 21:30:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.255 21:30:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.255 21:30:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.255 21:30:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.255 21:30:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.255 21:30:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.255 21:30:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.255 21:30:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.255 21:30:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.255 21:30:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.515 [2024-07-24 21:30:38.497402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.515 [2024-07-24 21:30:38.564087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.515 [2024-07-24 21:30:38.564090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.515 [2024-07-24 21:30:38.605267] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.515 [2024-07-24 21:30:38.605307] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.840 21:30:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.840 21:30:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:33.840 spdk_app_start Round 2 00:05:33.840 21:30:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2881615 /var/tmp/spdk-nbd.sock 00:05:33.840 21:30:41 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2881615 ']' 00:05:33.840 21:30:41 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.840 21:30:41 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.840 21:30:41 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
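The waitfornbd helper that recurs in the trace gates on two conditions: the device name appearing in /proc/partitions and a single direct-I/O read succeeding. A simplified reimplementation of that idea is below; the retry count, block size, and stat check mirror the trace, the sleep between retries is an assumption.

waitfornbd() {
    local nbd_name=$1 i tmp
    tmp=$(mktemp)
    # Wait for the kernel to publish the partition entry (up to ~20 tries).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 1
    done
    # Then require one 4 KiB direct read to succeed and return data.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null &&
           [[ $(stat -c %s "$tmp") -ne 0 ]]; then
            rm -f "$tmp"; return 0
        fi
        sleep 1
    done
    rm -f "$tmp"; return 1
}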
00:05:33.840 21:30:41 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.840 21:30:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.840 21:30:41 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.840 21:30:41 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:33.840 21:30:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.840 Malloc0 00:05:33.840 21:30:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.840 Malloc1 00:05:33.840 21:30:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.840 21:30:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.100 /dev/nbd0 00:05:34.100 21:30:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.100 21:30:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:34.100 1+0 records in 00:05:34.100 1+0 records out 00:05:34.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180147 s, 22.7 MB/s 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:34.100 21:30:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:34.100 21:30:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.100 21:30:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.100 21:30:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.360 /dev/nbd1 00:05:34.360 21:30:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.360 21:30:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.360 1+0 records in 00:05:34.360 1+0 records out 00:05:34.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186982 s, 21.9 MB/s 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:34.360 21:30:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:34.360 21:30:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.360 21:30:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.360 21:30:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.360 21:30:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.360 21:30:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.360 21:30:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:34.360 { 00:05:34.360 "nbd_device": "/dev/nbd0", 00:05:34.360 "bdev_name": "Malloc0" 00:05:34.360 }, 00:05:34.360 { 00:05:34.360 "nbd_device": "/dev/nbd1", 00:05:34.360 "bdev_name": "Malloc1" 00:05:34.360 } 00:05:34.360 ]' 00:05:34.360 21:30:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.360 { 00:05:34.360 "nbd_device": "/dev/nbd0", 00:05:34.360 "bdev_name": "Malloc0" 00:05:34.360 }, 00:05:34.360 { 00:05:34.360 "nbd_device": "/dev/nbd1", 00:05:34.360 "bdev_name": "Malloc1" 00:05:34.360 } 00:05:34.360 ]' 00:05:34.360 21:30:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.620 21:30:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.620 /dev/nbd1' 00:05:34.620 21:30:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.620 21:30:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.620 /dev/nbd1' 00:05:34.620 21:30:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.620 21:30:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.620 21:30:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.620 21:30:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.620 21:30:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.620 21:30:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.620 21:30:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.621 256+0 records in 00:05:34.621 256+0 records out 00:05:34.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00965035 s, 109 MB/s 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.621 256+0 records in 00:05:34.621 256+0 records out 00:05:34.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013245 s, 79.2 MB/s 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.621 256+0 records in 00:05:34.621 256+0 records out 00:05:34.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141002 s, 74.4 MB/s 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.621 21:30:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.881 21:30:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.140 21:30:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.140 21:30:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.140 21:30:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.140 21:30:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.140 21:30:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.140 21:30:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.140 21:30:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.140 21:30:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.140 21:30:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.140 21:30:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.140 21:30:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.140 21:30:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.140 21:30:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.409 21:30:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.410 [2024-07-24 21:30:43.519937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.668 [2024-07-24 21:30:43.588877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.669 [2024-07-24 21:30:43.588879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.669 [2024-07-24 21:30:43.629626] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.669 [2024-07-24 21:30:43.629664] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.963 21:30:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2881615 /var/tmp/spdk-nbd.sock 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2881615 ']' 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
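The nbd_get_count checks at the end of each round (2 while the disks are attached, 0 after nbd_stop_disk) reduce to counting /dev/nbd entries in the nbd_get_disks JSON, exactly as the jq/grep pipeline in the trace does. A hedged equivalent, assuming the same socket as above:

nbd_get_count() {
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_get_disks |
        jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true  # grep exits 1 on count 0
}
# usage: [[ $(nbd_get_count) -eq 0 ]] || echo "NBD devices still attached"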
00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:38.963 21:30:46 event.app_repeat -- event/event.sh@39 -- # killprocess 2881615 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2881615 ']' 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2881615 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2881615 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2881615' 00:05:38.963 killing process with pid 2881615 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2881615 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2881615 00:05:38.963 spdk_app_start is called in Round 0. 00:05:38.963 Shutdown signal received, stop current app iteration 00:05:38.963 Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 reinitialization... 00:05:38.963 spdk_app_start is called in Round 1. 00:05:38.963 Shutdown signal received, stop current app iteration 00:05:38.963 Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 reinitialization... 00:05:38.963 spdk_app_start is called in Round 2. 00:05:38.963 Shutdown signal received, stop current app iteration 00:05:38.963 Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 reinitialization... 00:05:38.963 spdk_app_start is called in Round 3. 
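The killprocess helper used for both the scheduler app and app_repeat follows the same pattern each time: confirm the PID still belongs to the expected reactor process, send SIGTERM, then wait for it to exit. A reduced sketch of that pattern as it appears in the trace (the sudo safety check and error paths of the real helper are omitted):

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the trace
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"                       # SIGTERM; the app shuts its reactors down
    wait "$pid" 2>/dev/null || true   # wait only works if $pid is a child shell job
}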
00:05:38.963 Shutdown signal received, stop current app iteration 00:05:38.963 21:30:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:38.963 21:30:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:38.963 00:05:38.963 real 0m16.177s 00:05:38.963 user 0m35.160s 00:05:38.963 sys 0m2.374s 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.963 21:30:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.963 ************************************ 00:05:38.963 END TEST app_repeat 00:05:38.963 ************************************ 00:05:38.963 21:30:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:38.963 21:30:46 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:38.963 21:30:46 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.963 21:30:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.963 21:30:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.963 ************************************ 00:05:38.963 START TEST cpu_locks 00:05:38.963 ************************************ 00:05:38.963 21:30:46 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:38.963 * Looking for test storage... 00:05:38.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:38.963 21:30:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:38.963 21:30:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:38.963 21:30:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:38.963 21:30:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:38.963 21:30:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.963 21:30:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.963 21:30:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.963 ************************************ 00:05:38.963 START TEST default_locks 00:05:38.963 ************************************ 00:05:38.963 21:30:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:38.963 21:30:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2884600 00:05:38.963 21:30:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2884600 00:05:38.963 21:30:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.963 21:30:46 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2884600 ']' 00:05:38.963 21:30:46 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.963 21:30:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.963 21:30:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
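With app_repeat finished, cpu_locks.sh takes over. Its first case, default_locks, starts a single spdk_tgt pinned to core 0 and then verifies, in the trace that follows, that the core is actually protected by looking for an spdk_cpu_lock entry among the target's file locks. A rough sketch of that check, assuming the target is already running and its PID is in spdk_tgt_pid as shown below:

  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo 'core lock held'

The stray 'lslocks: write error' messages later in the log are most likely just lslocks hitting a pipe that grep -q closed after its first match; the lock check itself passes.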
00:05:38.963 21:30:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.963 21:30:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.963 [2024-07-24 21:30:46.968401] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:38.963 [2024-07-24 21:30:46.968455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884600 ] 00:05:38.963 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.963 [2024-07-24 21:30:47.022310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.223 [2024-07-24 21:30:47.103287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.793 21:30:47 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.793 21:30:47 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:39.793 21:30:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2884600 00:05:39.793 21:30:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.793 21:30:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2884600 00:05:40.052 lslocks: write error 00:05:40.052 21:30:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2884600 00:05:40.052 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2884600 ']' 00:05:40.052 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2884600 00:05:40.052 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:40.052 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.052 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2884600 00:05:40.052 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.052 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.052 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2884600' 00:05:40.052 killing process with pid 2884600 00:05:40.052 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2884600 00:05:40.052 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2884600 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2884600 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2884600 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2884600 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2884600 ']' 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2884600) - No such process 00:05:40.339 ERROR: process (pid: 2884600) is no longer running 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.339 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.599 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.599 21:30:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:40.599 21:30:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:40.599 21:30:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:40.599 21:30:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:40.599 00:05:40.599 real 0m1.538s 00:05:40.599 user 0m1.604s 00:05:40.599 sys 0m0.507s 00:05:40.600 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.600 21:30:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.600 ************************************ 00:05:40.600 END TEST default_locks 00:05:40.600 ************************************ 00:05:40.600 21:30:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:40.600 21:30:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.600 21:30:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.600 21:30:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.600 ************************************ 00:05:40.600 START TEST default_locks_via_rpc 00:05:40.600 ************************************ 00:05:40.600 21:30:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:40.600 21:30:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2884867 00:05:40.600 21:30:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2884867 00:05:40.600 21:30:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:05:40.600 21:30:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2884867 ']' 00:05:40.600 21:30:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.600 21:30:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.600 21:30:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.600 21:30:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.600 21:30:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.600 [2024-07-24 21:30:48.563233] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:40.600 [2024-07-24 21:30:48.563271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884867 ] 00:05:40.600 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.600 [2024-07-24 21:30:48.615565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.600 [2024-07-24 21:30:48.694990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2884867 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2884867 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.540 21:30:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 
2884867 00:05:41.541 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2884867 ']' 00:05:41.541 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2884867 00:05:41.541 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:41.541 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.541 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2884867 00:05:41.541 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.541 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.541 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2884867' 00:05:41.541 killing process with pid 2884867 00:05:41.541 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2884867 00:05:41.541 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2884867 00:05:42.110 00:05:42.110 real 0m1.448s 00:05:42.110 user 0m1.523s 00:05:42.110 sys 0m0.452s 00:05:42.110 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.110 21:30:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.110 ************************************ 00:05:42.110 END TEST default_locks_via_rpc 00:05:42.110 ************************************ 00:05:42.110 21:30:49 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:42.110 21:30:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.110 21:30:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.110 21:30:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.110 ************************************ 00:05:42.110 START TEST non_locking_app_on_locked_coremask 00:05:42.110 ************************************ 00:05:42.110 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:42.110 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2885136 00:05:42.110 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2885136 /var/tmp/spdk.sock 00:05:42.110 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.110 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2885136 ']' 00:05:42.110 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.110 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.110 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:42.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.110 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.110 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.110 [2024-07-24 21:30:50.071217] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:42.110 [2024-07-24 21:30:50.071261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885136 ] 00:05:42.110 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.110 [2024-07-24 21:30:50.123293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.110 [2024-07-24 21:30:50.202924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.050 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.050 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:43.050 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:43.050 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2885355 00:05:43.050 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2885355 /var/tmp/spdk2.sock 00:05:43.050 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2885355 ']' 00:05:43.050 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.050 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.050 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.050 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.050 21:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.050 [2024-07-24 21:30:50.901212] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:43.050 [2024-07-24 21:30:50.901263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885355 ] 00:05:43.050 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.050 [2024-07-24 21:30:50.968625] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:43.050 [2024-07-24 21:30:50.968647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.050 [2024-07-24 21:30:51.119921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.618 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.618 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:43.618 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2885136 00:05:43.618 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2885136 00:05:43.618 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.878 lslocks: write error 00:05:43.878 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2885136 00:05:43.878 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2885136 ']' 00:05:43.878 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2885136 00:05:43.878 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:43.878 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.878 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2885136 00:05:43.878 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.878 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.878 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2885136' 00:05:43.878 killing process with pid 2885136 00:05:43.878 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2885136 00:05:43.878 21:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2885136 00:05:44.817 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2885355 00:05:44.817 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2885355 ']' 00:05:44.817 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2885355 00:05:44.817 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:44.817 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.817 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2885355 00:05:44.817 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.817 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.817 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2885355' 00:05:44.817 
killing process with pid 2885355 00:05:44.817 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2885355 00:05:44.817 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2885355 00:05:45.077 00:05:45.077 real 0m2.929s 00:05:45.077 user 0m3.137s 00:05:45.077 sys 0m0.797s 00:05:45.077 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.077 21:30:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.077 ************************************ 00:05:45.077 END TEST non_locking_app_on_locked_coremask 00:05:45.077 ************************************ 00:05:45.077 21:30:52 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:45.077 21:30:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.077 21:30:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.077 21:30:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.077 ************************************ 00:05:45.077 START TEST locking_app_on_unlocked_coremask 00:05:45.077 ************************************ 00:05:45.077 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:45.077 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2885632 00:05:45.077 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2885632 /var/tmp/spdk.sock 00:05:45.077 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:45.077 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2885632 ']' 00:05:45.077 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.077 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.077 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.077 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.077 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.077 [2024-07-24 21:30:53.062378] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:45.077 [2024-07-24 21:30:53.062422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885632 ] 00:05:45.077 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.077 [2024-07-24 21:30:53.116794] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:45.077 [2024-07-24 21:30:53.116817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.077 [2024-07-24 21:30:53.187275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.015 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.015 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:46.015 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:46.015 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2885862 00:05:46.015 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2885862 /var/tmp/spdk2.sock 00:05:46.015 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2885862 ']' 00:05:46.015 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.015 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.015 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.015 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.015 21:30:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.015 [2024-07-24 21:30:53.907602] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:05:46.015 [2024-07-24 21:30:53.907650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885862 ] 00:05:46.015 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.015 [2024-07-24 21:30:53.984059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.015 [2024-07-24 21:30:54.130314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.952 21:30:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.952 21:30:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:46.952 21:30:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2885862 00:05:46.952 21:30:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2885862 00:05:46.952 21:30:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.211 lslocks: write error 00:05:47.211 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2885632 00:05:47.211 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2885632 ']' 00:05:47.211 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2885632 00:05:47.211 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:47.211 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.211 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2885632 00:05:47.211 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.211 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.211 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2885632' 00:05:47.211 killing process with pid 2885632 00:05:47.211 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2885632 00:05:47.211 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2885632 00:05:47.781 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2885862 00:05:47.781 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2885862 ']' 00:05:47.781 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2885862 00:05:47.781 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:47.781 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.781 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2885862 00:05:48.040 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:48.040 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.040 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2885862' 00:05:48.040 killing process with pid 2885862 00:05:48.040 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2885862 00:05:48.040 21:30:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2885862 00:05:48.300 00:05:48.300 real 0m3.194s 00:05:48.300 user 0m3.433s 00:05:48.300 sys 0m0.900s 00:05:48.300 21:30:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.300 21:30:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.300 ************************************ 00:05:48.300 END TEST locking_app_on_unlocked_coremask 00:05:48.300 ************************************ 00:05:48.300 21:30:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:48.300 21:30:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.300 21:30:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.300 21:30:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.300 ************************************ 00:05:48.300 START TEST locking_app_on_locked_coremask 00:05:48.300 ************************************ 00:05:48.300 21:30:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:48.300 21:30:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2886356 00:05:48.300 21:30:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.300 21:30:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2886356 /var/tmp/spdk.sock 00:05:48.300 21:30:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2886356 ']' 00:05:48.300 21:30:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.300 21:30:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.300 21:30:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.300 21:30:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.300 21:30:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.300 [2024-07-24 21:30:56.328249] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:05:48.300 [2024-07-24 21:30:56.328294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886356 ] 00:05:48.300 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.300 [2024-07-24 21:30:56.382630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.559 [2024-07-24 21:30:56.453675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2886364 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2886364 /var/tmp/spdk2.sock 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2886364 /var/tmp/spdk2.sock 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2886364 /var/tmp/spdk2.sock 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2886364 ']' 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.126 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.126 [2024-07-24 21:30:57.167468] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:05:49.126 [2024-07-24 21:30:57.167519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886364 ] 00:05:49.126 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.469 [2024-07-24 21:30:57.244431] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2886356 has claimed it. 00:05:49.469 [2024-07-24 21:30:57.244468] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:49.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2886364) - No such process 00:05:49.729 ERROR: process (pid: 2886364) is no longer running 00:05:49.729 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.729 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:49.729 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:49.729 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:49.729 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:49.729 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:49.729 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2886356 00:05:49.729 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2886356 00:05:49.729 21:30:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.299 lslocks: write error 00:05:50.299 21:30:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2886356 00:05:50.299 21:30:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2886356 ']' 00:05:50.299 21:30:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2886356 00:05:50.299 21:30:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:50.299 21:30:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.299 21:30:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2886356 00:05:50.299 21:30:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.299 21:30:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.299 21:30:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2886356' 00:05:50.299 killing process with pid 2886356 00:05:50.299 21:30:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2886356 00:05:50.299 21:30:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2886356 00:05:50.869 00:05:50.869 real 0m2.435s 00:05:50.869 user 0m2.693s 00:05:50.869 sys 0m0.650s 00:05:50.869 21:30:58 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.869 21:30:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.869 ************************************ 00:05:50.869 END TEST locking_app_on_locked_coremask 00:05:50.869 ************************************ 00:05:50.869 21:30:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:50.869 21:30:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.869 21:30:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.869 21:30:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.869 ************************************ 00:05:50.869 START TEST locking_overlapped_coremask 00:05:50.869 ************************************ 00:05:50.869 21:30:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:50.869 21:30:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2886732 00:05:50.869 21:30:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2886732 /var/tmp/spdk.sock 00:05:50.869 21:30:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:50.870 21:30:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2886732 ']' 00:05:50.870 21:30:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.870 21:30:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.870 21:30:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.870 21:30:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.870 21:30:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.870 [2024-07-24 21:30:58.832624] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:05:50.870 [2024-07-24 21:30:58.832666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886732 ] 00:05:50.870 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.870 [2024-07-24 21:30:58.886964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.870 [2024-07-24 21:30:58.960812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.870 [2024-07-24 21:30:58.960910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.870 [2024-07-24 21:30:58.960912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2886866 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2886866 /var/tmp/spdk2.sock 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2886866 /var/tmp/spdk2.sock 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2886866 /var/tmp/spdk2.sock 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2886866 ']' 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.809 21:30:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.809 [2024-07-24 21:30:59.676821] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:05:51.809 [2024-07-24 21:30:59.676870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886866 ] 00:05:51.809 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.809 [2024-07-24 21:30:59.753298] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2886732 has claimed it. 00:05:51.809 [2024-07-24 21:30:59.753338] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:52.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2886866) - No such process 00:05:52.380 ERROR: process (pid: 2886866) is no longer running 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2886732 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2886732 ']' 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2886732 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2886732 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2886732' 00:05:52.380 killing process with pid 2886732 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2886732 00:05:52.380 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2886732 00:05:52.641 00:05:52.641 real 0m1.884s 00:05:52.641 user 0m5.327s 00:05:52.641 sys 0m0.399s 00:05:52.641 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.641 21:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.641 ************************************ 00:05:52.641 END TEST locking_overlapped_coremask 00:05:52.641 ************************************ 00:05:52.641 21:31:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:52.641 21:31:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.641 21:31:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.641 21:31:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.641 ************************************ 00:05:52.641 START TEST locking_overlapped_coremask_via_rpc 00:05:52.641 ************************************ 00:05:52.641 21:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:52.641 21:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2887118 00:05:52.641 21:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2887118 /var/tmp/spdk.sock 00:05:52.641 21:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:52.641 21:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2887118 ']' 00:05:52.641 21:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.641 21:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.641 21:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.641 21:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.641 21:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.901 [2024-07-24 21:31:00.785306] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:52.901 [2024-07-24 21:31:00.785353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887118 ] 00:05:52.901 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.901 [2024-07-24 21:31:00.839784] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:52.901 [2024-07-24 21:31:00.839809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.901 [2024-07-24 21:31:00.911012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.901 [2024-07-24 21:31:00.911129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.901 [2024-07-24 21:31:00.911132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.841 21:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.841 21:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:53.841 21:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2887287 00:05:53.841 21:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:53.841 21:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2887287 /var/tmp/spdk2.sock 00:05:53.841 21:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2887287 ']' 00:05:53.841 21:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.841 21:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.841 21:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.841 21:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.841 21:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.841 [2024-07-24 21:31:01.640843] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:53.841 [2024-07-24 21:31:01.640888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887287 ] 00:05:53.841 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.841 [2024-07-24 21:31:01.716330] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:53.841 [2024-07-24 21:31:01.716358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.841 [2024-07-24 21:31:01.868027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.841 [2024-07-24 21:31:01.869934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:53.841 [2024-07-24 21:31:01.871049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.412 [2024-07-24 21:31:02.463114] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2887118 has claimed it. 
00:05:54.412 request: 00:05:54.412 { 00:05:54.412 "method": "framework_enable_cpumask_locks", 00:05:54.412 "req_id": 1 00:05:54.412 } 00:05:54.412 Got JSON-RPC error response 00:05:54.412 response: 00:05:54.412 { 00:05:54.412 "code": -32603, 00:05:54.412 "message": "Failed to claim CPU core: 2" 00:05:54.412 } 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2887118 /var/tmp/spdk.sock 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2887118 ']' 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.412 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.672 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.672 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:54.672 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2887287 /var/tmp/spdk2.sock 00:05:54.672 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2887287 ']' 00:05:54.672 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.672 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.672 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
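framework_enable_cpumask_locks is the RPC the test uses to make a running target claim its cores after the fact. Against the first target it succeeds, while the same call on /var/tmp/spdk2.sock fails with JSON-RPC error -32603 ("Failed to claim CPU core: 2") because the first target already holds the lock for core 2. A sketch of issuing the same two calls with scripts/rpc.py (assuming the rpc.py subcommand matching the method name, which is what the rpc_cmd helper wraps):

  ./scripts/rpc.py framework_enable_cpumask_locks                          # first target: claims cores 0-2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed
  ls /var/tmp/spdk_cpu_lock_*                                              # lock files held by the first target

The expected lock files /var/tmp/spdk_cpu_lock_000 through _002 are exactly what check_remaining_locks compares against further down.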
00:05:54.672 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.672 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.933 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.933 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:54.933 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:54.933 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:54.933 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:54.933 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:54.933 00:05:54.933 real 0m2.129s 00:05:54.933 user 0m0.888s 00:05:54.933 sys 0m0.174s 00:05:54.933 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.933 21:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.933 ************************************ 00:05:54.933 END TEST locking_overlapped_coremask_via_rpc 00:05:54.933 ************************************ 00:05:54.933 21:31:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:54.933 21:31:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2887118 ]] 00:05:54.933 21:31:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2887118 00:05:54.933 21:31:02 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2887118 ']' 00:05:54.933 21:31:02 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2887118 00:05:54.933 21:31:02 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:54.933 21:31:02 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.933 21:31:02 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2887118 00:05:54.933 21:31:02 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.933 21:31:02 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.933 21:31:02 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2887118' 00:05:54.933 killing process with pid 2887118 00:05:54.933 21:31:02 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2887118 00:05:54.933 21:31:02 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2887118 00:05:55.193 21:31:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2887287 ]] 00:05:55.193 21:31:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2887287 00:05:55.193 21:31:03 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2887287 ']' 00:05:55.193 21:31:03 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2887287 00:05:55.193 21:31:03 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:55.193 21:31:03 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:05:55.193 21:31:03 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2887287 00:05:55.193 21:31:03 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:55.193 21:31:03 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:55.452 21:31:03 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2887287' 00:05:55.452 killing process with pid 2887287 00:05:55.452 21:31:03 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2887287 00:05:55.452 21:31:03 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2887287 00:05:55.712 21:31:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:55.712 21:31:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:55.712 21:31:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2887118 ]] 00:05:55.712 21:31:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2887118 00:05:55.712 21:31:03 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2887118 ']' 00:05:55.712 21:31:03 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2887118 00:05:55.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2887118) - No such process 00:05:55.712 21:31:03 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2887118 is not found' 00:05:55.712 Process with pid 2887118 is not found 00:05:55.712 21:31:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2887287 ]] 00:05:55.712 21:31:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2887287 00:05:55.712 21:31:03 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2887287 ']' 00:05:55.712 21:31:03 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2887287 00:05:55.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2887287) - No such process 00:05:55.712 21:31:03 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2887287 is not found' 00:05:55.712 Process with pid 2887287 is not found 00:05:55.712 21:31:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:55.712 00:05:55.712 real 0m16.835s 00:05:55.712 user 0m29.265s 00:05:55.712 sys 0m4.777s 00:05:55.712 21:31:03 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.712 21:31:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.712 ************************************ 00:05:55.712 END TEST cpu_locks 00:05:55.712 ************************************ 00:05:55.712 00:05:55.712 real 0m41.449s 00:05:55.712 user 1m19.235s 00:05:55.712 sys 0m8.036s 00:05:55.712 21:31:03 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.712 21:31:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.712 ************************************ 00:05:55.712 END TEST event 00:05:55.712 ************************************ 00:05:55.712 21:31:03 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:55.712 21:31:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.712 21:31:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.712 21:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:55.712 ************************************ 00:05:55.712 START TEST thread 00:05:55.712 ************************************ 00:05:55.712 21:31:03 thread -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:55.712 * Looking for test storage... 00:05:55.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:55.712 21:31:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:55.712 21:31:03 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:55.712 21:31:03 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.712 21:31:03 thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.972 ************************************ 00:05:55.972 START TEST thread_poller_perf 00:05:55.972 ************************************ 00:05:55.972 21:31:03 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:55.972 [2024-07-24 21:31:03.869824] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:55.972 [2024-07-24 21:31:03.869903] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887704 ] 00:05:55.972 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.972 [2024-07-24 21:31:03.928183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.972 [2024-07-24 21:31:04.002481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.972 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:57.349 ====================================== 00:05:57.349 busy:2305127984 (cyc) 00:05:57.349 total_run_count: 409000 00:05:57.349 tsc_hz: 2300000000 (cyc) 00:05:57.349 ====================================== 00:05:57.349 poller_cost: 5636 (cyc), 2450 (nsec) 00:05:57.349 00:05:57.349 real 0m1.227s 00:05:57.349 user 0m1.147s 00:05:57.349 sys 0m0.075s 00:05:57.349 21:31:05 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.349 21:31:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.349 ************************************ 00:05:57.349 END TEST thread_poller_perf 00:05:57.349 ************************************ 00:05:57.349 21:31:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:57.349 21:31:05 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:57.349 21:31:05 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.349 21:31:05 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.349 ************************************ 00:05:57.349 START TEST thread_poller_perf 00:05:57.349 ************************************ 00:05:57.349 21:31:05 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:57.349 [2024-07-24 21:31:05.167800] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
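The summary block above is a straight average: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure rescales that by the reported TSC rate. A quick check of the arithmetic for this run:

  echo '2305127984 / 409000' | bc      # 5636 cycles per poller invocation
  echo 'scale=1; 5636 / 2.3' | bc      # 2450.4 -> 2450 nsec at the reported 2300000000 Hz (2.3 GHz) TSC rate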
00:05:57.349 [2024-07-24 21:31:05.167861] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887953 ] 00:05:57.349 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.349 [2024-07-24 21:31:05.226471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.350 [2024-07-24 21:31:05.298859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.350 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:58.286 ====================================== 00:05:58.286 busy:2301565106 (cyc) 00:05:58.286 total_run_count: 5286000 00:05:58.286 tsc_hz: 2300000000 (cyc) 00:05:58.286 ====================================== 00:05:58.286 poller_cost: 435 (cyc), 189 (nsec) 00:05:58.286 00:05:58.286 real 0m1.221s 00:05:58.286 user 0m1.148s 00:05:58.286 sys 0m0.068s 00:05:58.286 21:31:06 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.286 21:31:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.286 ************************************ 00:05:58.286 END TEST thread_poller_perf 00:05:58.286 ************************************ 00:05:58.286 21:31:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:58.286 00:05:58.286 real 0m2.670s 00:05:58.286 user 0m2.397s 00:05:58.286 sys 0m0.280s 00:05:58.546 21:31:06 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.546 21:31:06 thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.546 ************************************ 00:05:58.546 END TEST thread 00:05:58.546 ************************************ 00:05:58.546 21:31:06 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:58.546 21:31:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.546 21:31:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.546 21:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.546 ************************************ 00:05:58.546 START TEST accel 00:05:58.546 ************************************ 00:05:58.546 21:31:06 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:58.546 * Looking for test storage... 00:05:58.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:58.546 21:31:06 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:58.546 21:31:06 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:58.546 21:31:06 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:58.546 21:31:06 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2888244 00:05:58.546 21:31:06 accel -- accel/accel.sh@63 -- # waitforlisten 2888244 00:05:58.546 21:31:06 accel -- common/autotest_common.sh@829 -- # '[' -z 2888244 ']' 00:05:58.546 21:31:06 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.546 21:31:06 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.546 21:31:06 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
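The second run differs only in the poller period (0 microseconds instead of 1), and the same arithmetic applies: 2301565106 busy cycles over 5286000 runs is about 435 cycles per call, or roughly 189 nsec at 2.3 GHz, so the zero-period run reports a much lower per-call cost than the 1 usec run above. The exact figures will vary by machine; only the ratio is of interest here.

  echo '2301565106 / 5286000' | bc     # 435 cycles per poller invocation with a 0 usec period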
00:05:58.546 21:31:06 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.546 21:31:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.546 21:31:06 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:58.546 21:31:06 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:58.546 21:31:06 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.546 21:31:06 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.546 21:31:06 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.546 21:31:06 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.546 21:31:06 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.546 21:31:06 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:58.546 21:31:06 accel -- accel/accel.sh@41 -- # jq -r . 00:05:58.546 [2024-07-24 21:31:06.595418] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:58.546 [2024-07-24 21:31:06.595463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888244 ] 00:05:58.546 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.546 [2024-07-24 21:31:06.650557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.805 [2024-07-24 21:31:06.732751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.373 21:31:07 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.373 21:31:07 accel -- common/autotest_common.sh@862 -- # return 0 00:05:59.373 21:31:07 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:59.373 21:31:07 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:59.373 21:31:07 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:59.373 21:31:07 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:59.373 21:31:07 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:59.373 21:31:07 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:59.373 21:31:07 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.373 21:31:07 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:59.373 21:31:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.373 21:31:07 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.373 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.373 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.373 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.373 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.373 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.373 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.373 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.373 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.373 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.373 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.373 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.373 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.373 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.373 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.373 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.373 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.373 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.373 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.373 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.373 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.373 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.373 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.373 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.374 
21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.374 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.374 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.374 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.374 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.374 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.374 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.374 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.374 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.374 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.374 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.374 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.374 21:31:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.374 21:31:07 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.374 21:31:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.374 21:31:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.374 21:31:07 accel -- accel/accel.sh@75 -- # killprocess 2888244 00:05:59.374 21:31:07 accel -- common/autotest_common.sh@948 -- # '[' -z 2888244 ']' 00:05:59.374 21:31:07 accel -- common/autotest_common.sh@952 -- # kill -0 2888244 00:05:59.374 21:31:07 accel -- common/autotest_common.sh@953 -- # uname 00:05:59.374 21:31:07 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.374 21:31:07 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2888244 00:05:59.632 21:31:07 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.632 21:31:07 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.632 21:31:07 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2888244' 00:05:59.632 killing process with pid 2888244 00:05:59.632 21:31:07 accel -- common/autotest_common.sh@967 -- # kill 2888244 00:05:59.632 21:31:07 accel -- common/autotest_common.sh@972 -- # wait 2888244 00:05:59.891 21:31:07 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:59.891 21:31:07 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:59.891 21:31:07 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:59.891 21:31:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.891 21:31:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.891 21:31:07 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:59.891 21:31:07 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:59.891 21:31:07 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:59.891 21:31:07 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.891 21:31:07 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.891 21:31:07 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.891 21:31:07 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.891 21:31:07 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.891 21:31:07 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:59.891 21:31:07 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
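Every opcode read back in the loop above resolves to the software module, which is what accel_get_opc_assignments returns when no acceleration module has been configured (build_accel_config left the accel JSON config empty). A sketch of querying the same table directly, reusing the jq filter from the script; the exact opcode names depend on the SPDK version, so the sample output is illustrative only:

  ./scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # copy=software
  # fill=software
  # crc32c=software
  # ...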
00:05:59.891 21:31:07 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.891 21:31:07 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:59.891 21:31:07 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:59.891 21:31:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:59.891 21:31:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.891 21:31:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.891 ************************************ 00:05:59.891 START TEST accel_missing_filename 00:05:59.891 ************************************ 00:05:59.891 21:31:07 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:59.891 21:31:07 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:59.891 21:31:07 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:59.891 21:31:07 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:59.891 21:31:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.891 21:31:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:59.891 21:31:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.891 21:31:07 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:59.891 21:31:07 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:59.891 21:31:07 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:59.891 21:31:07 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.891 21:31:07 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.891 21:31:07 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.891 21:31:07 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.891 21:31:07 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.891 21:31:07 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:59.891 21:31:07 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:59.891 [2024-07-24 21:31:07.962599] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:05:59.891 [2024-07-24 21:31:07.962652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888513 ] 00:05:59.891 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.150 [2024-07-24 21:31:08.017322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.150 [2024-07-24 21:31:08.092113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.150 [2024-07-24 21:31:08.133037] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.150 [2024-07-24 21:31:08.192963] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:00.150 A filename is required. 
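The "A filename is required." failure is the expected negative case: a compress workload has no default input, so accel_perf refuses to start when -w compress is given without -l. The follow-up test passes a real input file; a minimal valid invocation along those lines (paths relative to an SPDK checkout, shown only as an illustration) would be:

  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib

Adding -y on top of that is the next negative case below, since the compress workload does not support result verification.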
00:06:00.150 21:31:08 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:00.150 21:31:08 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.150 21:31:08 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:00.150 21:31:08 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:00.150 21:31:08 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:00.150 21:31:08 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.150 00:06:00.150 real 0m0.328s 00:06:00.150 user 0m0.251s 00:06:00.150 sys 0m0.113s 00:06:00.150 21:31:08 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.150 21:31:08 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:00.150 ************************************ 00:06:00.150 END TEST accel_missing_filename 00:06:00.150 ************************************ 00:06:00.409 21:31:08 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.409 21:31:08 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:00.409 21:31:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.409 21:31:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.409 ************************************ 00:06:00.409 START TEST accel_compress_verify 00:06:00.409 ************************************ 00:06:00.409 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.409 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:00.409 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.409 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:00.409 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.409 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:00.409 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.409 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.409 21:31:08 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.409 21:31:08 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:00.409 21:31:08 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.409 21:31:08 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.409 21:31:08 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.409 21:31:08 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.409 21:31:08 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.409 
21:31:08 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:00.409 21:31:08 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:00.409 [2024-07-24 21:31:08.355140] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:00.409 [2024-07-24 21:31:08.355217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888632 ] 00:06:00.409 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.409 [2024-07-24 21:31:08.410487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.409 [2024-07-24 21:31:08.488323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.668 [2024-07-24 21:31:08.530003] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.668 [2024-07-24 21:31:08.589957] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:00.668 00:06:00.668 Compression does not support the verify option, aborting. 00:06:00.668 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:00.668 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.668 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:00.668 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:00.668 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:00.668 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.668 00:06:00.668 real 0m0.333s 00:06:00.668 user 0m0.240s 00:06:00.668 sys 0m0.118s 00:06:00.668 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.669 21:31:08 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:00.669 ************************************ 00:06:00.669 END TEST accel_compress_verify 00:06:00.669 ************************************ 00:06:00.669 21:31:08 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:00.669 21:31:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:00.669 21:31:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.669 21:31:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.669 ************************************ 00:06:00.669 START TEST accel_wrong_workload 00:06:00.669 ************************************ 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 
1 -w foobar 00:06:00.669 21:31:08 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:00.669 21:31:08 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:00.669 21:31:08 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.669 21:31:08 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.669 21:31:08 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.669 21:31:08 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.669 21:31:08 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.669 21:31:08 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:00.669 21:31:08 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:00.669 Unsupported workload type: foobar 00:06:00.669 [2024-07-24 21:31:08.748160] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:00.669 accel_perf options: 00:06:00.669 [-h help message] 00:06:00.669 [-q queue depth per core] 00:06:00.669 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:00.669 [-T number of threads per core 00:06:00.669 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:00.669 [-t time in seconds] 00:06:00.669 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:00.669 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:00.669 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:00.669 [-l for compress/decompress workloads, name of uncompressed input file 00:06:00.669 [-S for crc32c workload, use this seed value (default 0) 00:06:00.669 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:00.669 [-f for fill workload, use this BYTE value (default 255) 00:06:00.669 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:00.669 [-y verify result if this switch is on] 00:06:00.669 [-a tasks to allocate per core (default: same value as -q)] 00:06:00.669 Can be used to spread operations across a wider range of memory. 
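The option summary above is accel_perf rejecting the bogus "-w foobar" workload; the same usage text documents the switches the passing tests rely on, such as -w for the workload name, -S for a crc32c seed, -y to verify results, and -x for the xor source-buffer count (minimum 2, which is why the -x -1 case below also fails). A minimal valid crc32c run matching the invocation the suite uses later:

  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y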
00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.669 00:06:00.669 real 0m0.029s 00:06:00.669 user 0m0.016s 00:06:00.669 sys 0m0.013s 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.669 21:31:08 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:00.669 ************************************ 00:06:00.669 END TEST accel_wrong_workload 00:06:00.669 ************************************ 00:06:00.669 Error: writing output failed: Broken pipe 00:06:00.669 21:31:08 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:00.669 21:31:08 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:00.669 21:31:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.669 21:31:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.929 ************************************ 00:06:00.929 START TEST accel_negative_buffers 00:06:00.929 ************************************ 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:00.929 21:31:08 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:00.929 21:31:08 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:00.929 21:31:08 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.929 21:31:08 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.929 21:31:08 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.929 21:31:08 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.929 21:31:08 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.929 21:31:08 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:00.929 21:31:08 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:00.929 -x option must be non-negative. 
00:06:00.929 [2024-07-24 21:31:08.822051] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:00.929 accel_perf options: 00:06:00.929 [-h help message] 00:06:00.929 [-q queue depth per core] 00:06:00.929 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:00.929 [-T number of threads per core 00:06:00.929 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:00.929 [-t time in seconds] 00:06:00.929 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:00.929 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:00.929 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:00.929 [-l for compress/decompress workloads, name of uncompressed input file 00:06:00.929 [-S for crc32c workload, use this seed value (default 0) 00:06:00.929 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:00.929 [-f for fill workload, use this BYTE value (default 255) 00:06:00.929 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:00.929 [-y verify result if this switch is on] 00:06:00.929 [-a tasks to allocate per core (default: same value as -q)] 00:06:00.929 Can be used to spread operations across a wider range of memory. 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.929 00:06:00.929 real 0m0.030s 00:06:00.929 user 0m0.017s 00:06:00.929 sys 0m0.012s 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.929 21:31:08 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:00.929 ************************************ 00:06:00.929 END TEST accel_negative_buffers 00:06:00.929 ************************************ 00:06:00.929 Error: writing output failed: Broken pipe 00:06:00.929 21:31:08 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:00.929 21:31:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:00.929 21:31:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.929 21:31:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.929 ************************************ 00:06:00.929 START TEST accel_crc32c 00:06:00.929 ************************************ 00:06:00.929 21:31:08 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:00.929 21:31:08 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:00.929 [2024-07-24 21:31:08.921229] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:00.929 [2024-07-24 21:31:08.921296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888818 ] 00:06:00.929 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.929 [2024-07-24 21:31:08.978953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.188 [2024-07-24 21:31:09.057455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.188 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:01.188 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.188 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.188 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.188 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:01.188 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.188 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.188 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.188 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:01.188 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.188 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 
21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:01.189 21:31:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.121 21:31:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.121 21:31:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.121 21:31:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.121 21:31:10 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.121 21:31:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.121 21:31:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:02.122 21:31:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.122 00:06:02.122 real 0m1.342s 00:06:02.122 user 0m1.234s 00:06:02.122 sys 0m0.114s 00:06:02.122 21:31:10 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.122 21:31:10 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:02.122 ************************************ 00:06:02.122 END TEST accel_crc32c 00:06:02.122 ************************************ 00:06:02.380 21:31:10 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:02.380 21:31:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:02.380 21:31:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.380 21:31:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.380 ************************************ 00:06:02.380 START TEST accel_crc32c_C2 00:06:02.380 ************************************ 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:02.380 21:31:10 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.380 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:02.380 [2024-07-24 21:31:10.332337] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:02.380 [2024-07-24 21:31:10.332390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889065 ] 00:06:02.380 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.380 [2024-07-24 21:31:10.390348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.380 [2024-07-24 21:31:10.464857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.638 21:31:10 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.638 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.639 21:31:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.629 00:06:03.629 real 0m1.336s 00:06:03.629 user 0m1.220s 00:06:03.629 sys 0m0.121s 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.629 21:31:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:03.629 ************************************ 00:06:03.629 END TEST accel_crc32c_C2 00:06:03.629 ************************************ 00:06:03.629 21:31:11 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:03.629 21:31:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:03.629 21:31:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.629 21:31:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.629 ************************************ 00:06:03.629 START TEST accel_copy 00:06:03.629 ************************************ 00:06:03.629 21:31:11 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf 
-t 1 -w copy -y 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:03.629 21:31:11 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:03.629 [2024-07-24 21:31:11.725532] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:03.629 [2024-07-24 21:31:11.725585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889317 ] 00:06:03.889 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.889 [2024-07-24 21:31:11.781027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.889 [2024-07-24 21:31:11.853262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- 
# case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.889 21:31:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.293 21:31:13 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:05.293 21:31:13 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.293 00:06:05.293 real 0m1.329s 00:06:05.293 user 0m1.225s 00:06:05.293 sys 0m0.108s 00:06:05.293 21:31:13 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.293 21:31:13 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:05.293 ************************************ 00:06:05.293 END TEST accel_copy 00:06:05.293 ************************************ 00:06:05.293 21:31:13 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.293 21:31:13 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:05.293 21:31:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.293 21:31:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.293 ************************************ 00:06:05.293 START TEST accel_fill 00:06:05.293 ************************************ 00:06:05.293 21:31:13 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@34 -- 
# [[ 0 -gt 0 ]] 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.293 21:31:13 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:05.294 [2024-07-24 21:31:13.110488] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:05.294 [2024-07-24 21:31:13.110535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889566 ] 00:06:05.294 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.294 [2024-07-24 21:31:13.163890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.294 [2024-07-24 21:31:13.236312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.294 21:31:13 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 21:31:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:06.668 21:31:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.668 00:06:06.668 real 0m1.327s 00:06:06.668 user 0m1.221s 00:06:06.668 sys 0m0.110s 00:06:06.668 21:31:14 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.668 21:31:14 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:06.668 ************************************ 00:06:06.668 END TEST accel_fill 00:06:06.668 ************************************ 00:06:06.668 21:31:14 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:06.668 21:31:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:06.668 21:31:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.668 21:31:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.668 ************************************ 00:06:06.668 START TEST accel_copy_crc32c 00:06:06.668 ************************************ 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:06.668 21:31:14 
accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:06.668 [2024-07-24 21:31:14.494657] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:06.668 [2024-07-24 21:31:14.494708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889820 ] 00:06:06.668 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.668 [2024-07-24 21:31:14.548013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.668 [2024-07-24 21:31:14.619489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.668 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.669 21:31:14 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.669 21:31:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@21 
-- # case "$var" in 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.045 00:06:08.045 real 0m1.325s 00:06:08.045 user 0m1.219s 00:06:08.045 sys 0m0.112s 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.045 21:31:15 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:08.045 ************************************ 00:06:08.045 END TEST accel_copy_crc32c 00:06:08.045 ************************************ 00:06:08.045 21:31:15 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:08.045 21:31:15 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:08.045 21:31:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.045 21:31:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.045 ************************************ 00:06:08.045 START TEST accel_copy_crc32c_C2 00:06:08.045 ************************************ 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.045 
21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:08.045 21:31:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:08.045 [2024-07-24 21:31:15.876567] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:08.045 [2024-07-24 21:31:15.876632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890069 ] 00:06:08.045 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.045 [2024-07-24 21:31:15.932270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.045 [2024-07-24 21:31:16.004986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.045 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.046 21:31:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.424 00:06:09.424 real 0m1.329s 00:06:09.424 user 0m1.225s 00:06:09.424 sys 0m0.109s 00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:09.424 21:31:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:09.424 ************************************ 00:06:09.424 END TEST accel_copy_crc32c_C2 00:06:09.424 ************************************ 00:06:09.424 21:31:17 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:09.424 21:31:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:09.424 21:31:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.424 21:31:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.424 ************************************ 00:06:09.424 START TEST accel_dualcast 00:06:09.424 ************************************ 00:06:09.424 21:31:17 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:09.424 [2024-07-24 21:31:17.263158] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
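The accel_dualcast case starting here follows the same shape as the crc32c, copy, fill and copy_crc32c sections above: run_test wraps accel_test, which launches the standalone accel_perf example binary on the software module and then prints the real/user/sys timings before the END TEST banner. As a hedged sketch (not part of the harness), an equivalent manual run using only flags that appear in this log, assuming the build-tree layout shown in the accel_perf paths; -t is the run time in seconds and -w selects the workload, while -y and -C are carried over from the logged command lines rather than verified independently, and the harness-supplied -c /dev/fd/62 config argument is omitted:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second software-path dualcast run with the flags shown in the log
  ./build/examples/accel_perf -t 1 -w dualcast -y
  # the accel_crc32c_C2 case earlier in the log adds -C 2 to the same pattern
  ./build/examples/accel_perf -t 1 -w crc32c -y -C 2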
00:06:09.424 [2024-07-24 21:31:17.263223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890317 ] 00:06:09.424 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.424 [2024-07-24 21:31:17.318130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.424 [2024-07-24 21:31:17.389824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:09.424 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.425 21:31:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.803 21:31:18 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:10.803 21:31:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.803 00:06:10.803 real 0m1.331s 00:06:10.803 user 0m1.225s 00:06:10.803 sys 0m0.110s 00:06:10.803 21:31:18 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.803 21:31:18 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:10.803 ************************************ 00:06:10.803 END TEST accel_dualcast 00:06:10.803 ************************************ 00:06:10.803 21:31:18 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:10.803 21:31:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:10.803 21:31:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.803 21:31:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.803 ************************************ 00:06:10.803 START TEST accel_compare 00:06:10.803 ************************************ 00:06:10.803 21:31:18 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:10.803 21:31:18 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:10.803 21:31:18 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:10.803 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.803 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.803 21:31:18 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:10.803 21:31:18 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:10.803 21:31:18 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:10.803 21:31:18 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.803 21:31:18 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:10.804 [2024-07-24 21:31:18.655525] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
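(For reference while reading this trace: the long runs of `val=` / `case "$var"` / `IFS=:` / `read -r var val` lines are bash xtrace from accel.sh's accel_test and build_accel_config helpers, which launch the accel_perf example, pass it what appears to be an accel JSON config over /dev/fd/62, and read back its reported settings by splitting each line on ':'. A minimal standalone sketch of the compare run started above, assuming the already-built SPDK workspace at the path this log uses:

  # Software-path 'compare' workload for 1 second with result verification (-y),
  # mirroring 'run_test accel_compare accel_test -t 1 -w compare -y' in the log.
  # The -c /dev/fd/62 config used by the wrapper is omitted here for a plain software run.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compare -y
)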
00:06:10.804 [2024-07-24 21:31:18.655585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890571 ] 00:06:10.804 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.804 [2024-07-24 21:31:18.711522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.804 [2024-07-24 21:31:18.783436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:10.804 21:31:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.183 
21:31:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:12.183 21:31:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.183 00:06:12.183 real 0m1.330s 00:06:12.183 user 0m1.220s 00:06:12.183 sys 0m0.114s 00:06:12.183 21:31:19 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.183 21:31:19 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:12.183 ************************************ 00:06:12.183 END TEST accel_compare 00:06:12.183 ************************************ 00:06:12.183 21:31:19 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:12.183 21:31:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:12.183 21:31:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.183 21:31:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.183 ************************************ 00:06:12.184 START TEST accel_xor 00:06:12.184 ************************************ 00:06:12.184 21:31:20 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:12.184 [2024-07-24 21:31:20.044024] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
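(The accel_xor case started above follows the same pattern; a sketch of the equivalent direct invocation, under the same assumptions as the compare example. The `val=2` in its setup trace suggests two XOR source buffers are used when no `-x` option is given:

  # 1-second software XOR workload with verification, mirroring
  # 'run_test accel_xor accel_test -t 1 -w xor -y' in the log.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y
)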
00:06:12.184 [2024-07-24 21:31:20.044087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890818 ] 00:06:12.184 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.184 [2024-07-24 21:31:20.100124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.184 [2024-07-24 21:31:20.172036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.184 21:31:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.563 00:06:13.563 real 0m1.331s 00:06:13.563 user 0m1.224s 00:06:13.563 sys 0m0.110s 00:06:13.563 21:31:21 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.563 21:31:21 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:13.563 ************************************ 00:06:13.563 END TEST accel_xor 00:06:13.563 ************************************ 00:06:13.563 21:31:21 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:13.563 21:31:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:13.563 21:31:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.563 21:31:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.563 ************************************ 00:06:13.563 START TEST accel_xor 00:06:13.563 ************************************ 00:06:13.563 21:31:21 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:13.563 [2024-07-24 21:31:21.430901] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
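(The second accel_xor run started above adds `-x 3`; judging from the `val=3` that replaces `val=2` in its setup trace, `-x` selects the number of XOR source buffers. Sketch under the same assumptions:

  # Same 1-second XOR workload, but with three source buffers.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
)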
00:06:13.563 [2024-07-24 21:31:21.430967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891063 ] 00:06:13.563 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.563 [2024-07-24 21:31:21.485913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.563 [2024-07-24 21:31:21.557842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.563 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.564 21:31:21 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.564 21:31:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:14.942 21:31:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.942 00:06:14.942 real 0m1.330s 00:06:14.942 user 0m1.223s 00:06:14.942 sys 0m0.109s 00:06:14.942 21:31:22 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.942 21:31:22 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:14.942 ************************************ 00:06:14.942 END TEST accel_xor 00:06:14.942 ************************************ 00:06:14.942 21:31:22 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:14.942 21:31:22 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:14.942 21:31:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.942 21:31:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.942 ************************************ 00:06:14.942 START TEST accel_dif_verify 00:06:14.942 ************************************ 00:06:14.942 21:31:22 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:14.942 [2024-07-24 21:31:22.813182] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
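(The DIF cases that follow are driven without the `-y` flag — the wrapper line in this log is 'run_test accel_dif_verify accel_test -t 1 -w dif_verify'. A sketch of the dif_verify run started above, same assumptions as before:

  # 1-second software dif_verify workload, as launched by the accel.sh wrapper.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify
)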
00:06:14.942 [2024-07-24 21:31:22.813231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891316 ] 00:06:14.942 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.942 [2024-07-24 21:31:22.866279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.942 [2024-07-24 21:31:22.938757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.942 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:14.943 21:31:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:16.319 21:31:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.319 00:06:16.319 real 0m1.320s 00:06:16.319 user 0m1.212s 00:06:16.319 sys 0m0.111s 00:06:16.319 21:31:24 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.319 21:31:24 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:16.319 ************************************ 00:06:16.319 END TEST accel_dif_verify 00:06:16.319 ************************************ 00:06:16.319 21:31:24 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:16.319 21:31:24 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:16.319 21:31:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.319 21:31:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.319 ************************************ 00:06:16.319 START TEST accel_dif_generate 00:06:16.319 ************************************ 00:06:16.319 21:31:24 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:16.319 [2024-07-24 21:31:24.195132] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:16.319 [2024-07-24 21:31:24.195190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891564 ] 00:06:16.319 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.319 [2024-07-24 21:31:24.248284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.319 [2024-07-24 21:31:24.320457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.319 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.320 21:31:24 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:16.320 21:31:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:17.695 21:31:25 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.695 00:06:17.695 real 0m1.321s 
00:06:17.695 user 0m1.216s 00:06:17.695 sys 0m0.107s 00:06:17.695 21:31:25 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.695 21:31:25 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:17.695 ************************************ 00:06:17.695 END TEST accel_dif_generate 00:06:17.695 ************************************ 00:06:17.696 21:31:25 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:17.696 21:31:25 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:17.696 21:31:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.696 21:31:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.696 ************************************ 00:06:17.696 START TEST accel_dif_generate_copy 00:06:17.696 ************************************ 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:17.696 [2024-07-24 21:31:25.593615] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
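The accel_test wrapper driving this block is a thin shim over SPDK's accel_perf example: the accel.sh@12 entry a few lines back shows the exact command it launches. A rough standalone equivalent of this dif_generate_copy case, run from the same workspace (the -c /dev/fd/62 argument only feeds in the JSON accel config assembled by build_accel_config, so a manual run can presumably omit it and take the defaults), would be:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy

With no hardware accel module loaded (the '[[ -n '' ]]' checks above see an empty module list), the work lands on the software module, which is what the closing '[[ -n software ]]' assertions verify.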
00:06:17.696 [2024-07-24 21:31:25.593687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891811 ] 00:06:17.696 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.696 [2024-07-24 21:31:25.651581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.696 [2024-07-24 21:31:25.723914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.696 21:31:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.072 00:06:19.072 real 0m1.334s 00:06:19.072 user 0m1.221s 00:06:19.072 sys 0m0.115s 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.072 21:31:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:19.072 ************************************ 00:06:19.072 END TEST accel_dif_generate_copy 00:06:19.072 ************************************ 00:06:19.072 21:31:26 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:19.072 21:31:26 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:19.072 21:31:26 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:19.072 21:31:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.072 21:31:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.072 ************************************ 00:06:19.072 START TEST accel_comp 00:06:19.072 ************************************ 00:06:19.072 21:31:26 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:19.072 21:31:26 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:19.072 21:31:26 accel.accel_comp 
-- accel/accel.sh@17 -- # local accel_module 00:06:19.072 21:31:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.072 21:31:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.072 21:31:26 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:19.072 21:31:26 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:19.072 21:31:26 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:19.072 21:31:26 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.072 21:31:26 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.072 21:31:26 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.072 21:31:26 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.073 21:31:26 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.073 21:31:26 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:19.073 21:31:26 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:19.073 [2024-07-24 21:31:26.985000] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:19.073 [2024-07-24 21:31:26.985052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892066 ] 00:06:19.073 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.073 [2024-07-24 21:31:27.038790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.073 [2024-07-24 21:31:27.110250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:19.073 21:31:27 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- 
# val= 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:19.073 21:31:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:20.449 21:31:28 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.449 00:06:20.449 real 0m1.326s 00:06:20.449 user 0m1.222s 00:06:20.449 sys 0m0.107s 00:06:20.449 21:31:28 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.449 21:31:28 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:20.449 ************************************ 00:06:20.449 END TEST accel_comp 00:06:20.449 ************************************ 00:06:20.449 21:31:28 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:20.449 21:31:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:20.449 21:31:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.449 21:31:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.449 ************************************ 00:06:20.449 START TEST accel_decomp 00:06:20.449 
************************************ 00:06:20.449 21:31:28 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:20.449 [2024-07-24 21:31:28.368631] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:20.449 [2024-07-24 21:31:28.368696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892316 ] 00:06:20.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.449 [2024-07-24 21:31:28.423245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.449 [2024-07-24 21:31:28.494807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.449 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 
21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.450 21:31:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:21.831 21:31:29 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.831 00:06:21.831 real 0m1.328s 00:06:21.831 user 0m1.212s 00:06:21.831 sys 0m0.118s 00:06:21.831 21:31:29 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.831 21:31:29 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:21.831 ************************************ 00:06:21.831 END TEST 
accel_decomp 00:06:21.831 ************************************ 00:06:21.831 21:31:29 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:21.831 21:31:29 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:21.831 21:31:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.831 21:31:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.831 ************************************ 00:06:21.831 START TEST accel_decomp_full 00:06:21.831 ************************************ 00:06:21.832 21:31:29 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:21.832 [2024-07-24 21:31:29.754849] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
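accel_decomp_full runs the same decompress-and-verify workload against the test vector at spdk/test/accel/bib, but with -o 0 appended; in the config dump that follows, the data size comes through as '111250 bytes' instead of the 4096-byte chunks used by the earlier cases, i.e. the vector is apparently processed in one full-sized operation (verify stays on, hence val=Yes). A hand-run approximation, assuming the same workspace layout, would be:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0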
00:06:21.832 [2024-07-24 21:31:29.754914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892568 ] 00:06:21.832 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.832 [2024-07-24 21:31:29.809082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.832 [2024-07-24 21:31:29.881292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.832 21:31:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 21:31:31 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:23.288 21:31:31 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.288 00:06:23.288 real 0m1.338s 00:06:23.288 user 0m1.234s 00:06:23.288 sys 0m0.105s 00:06:23.288 21:31:31 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.288 21:31:31 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:23.288 ************************************ 00:06:23.288 END TEST accel_decomp_full 00:06:23.288 ************************************ 00:06:23.288 21:31:31 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:23.288 21:31:31 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:23.288 21:31:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.288 21:31:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.288 ************************************ 00:06:23.288 START TEST accel_decomp_mcore 00:06:23.288 ************************************ 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
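accel_decomp_mcore repeats the decompress/verify case with -m 0xf, so accel_perf is scheduled on a four-core mask instead of the 0x1 mask used so far; the EAL output below correspondingly reports 'Total cores available: 4' and reactors on cores 0-3, and the user time in this test's summary climbs to several times the wall-clock second where the single-core cases stayed close to 1:1. A manual equivalent, assuming the same workspace layout, would be roughly:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf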
00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:23.288 [2024-07-24 21:31:31.150660] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:23.288 [2024-07-24 21:31:31.150707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892817 ] 00:06:23.288 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.288 [2024-07-24 21:31:31.204541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.288 [2024-07-24 21:31:31.278907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.288 [2024-07-24 21:31:31.279005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.288 [2024-07-24 21:31:31.279084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.288 [2024-07-24 21:31:31.279087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 
-- # val= 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.288 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r 
var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.289 21:31:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.671 00:06:24.671 real 0m1.345s 00:06:24.671 user 0m4.570s 00:06:24.671 sys 0m0.118s 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.671 21:31:32 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:24.671 ************************************ 00:06:24.671 END TEST accel_decomp_mcore 00:06:24.671 ************************************ 00:06:24.671 21:31:32 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:24.671 21:31:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:24.671 21:31:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.671 21:31:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.671 ************************************ 00:06:24.671 START TEST accel_decomp_full_mcore 00:06:24.671 ************************************ 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.671 21:31:32 
accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:24.671 [2024-07-24 21:31:32.564757] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:24.671 [2024-07-24 21:31:32.564805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893065 ] 00:06:24.671 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.671 [2024-07-24 21:31:32.620150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.671 [2024-07-24 21:31:32.695697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.671 [2024-07-24 21:31:32.695786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.671 [2024-07-24 21:31:32.695873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.671 [2024-07-24 21:31:32.695875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # 
val=decompress 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.671 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:24.672 21:31:32 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.672 21:31:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:33 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.052 00:06:26.052 real 0m1.358s 00:06:26.052 user 0m4.607s 00:06:26.052 sys 0m0.116s 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.052 21:31:33 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:26.052 ************************************ 00:06:26.052 END TEST accel_decomp_full_mcore 00:06:26.052 ************************************ 00:06:26.052 21:31:33 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:26.052 21:31:33 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:26.052 21:31:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.052 21:31:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.052 ************************************ 00:06:26.052 START TEST accel_decomp_mthread 00:06:26.052 ************************************ 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 
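The commands traced above exercise SPDK's accel_perf example with a decompress workload. For reference, a minimal sketch of equivalent stand-alone invocations, using only the flags visible in the trace (-t run time in seconds, -w workload, -l compressed input file, -y verify, -o 0 for full-buffer transfers, -m core mask, -T worker thread count); the harness additionally feeds an accel JSON config over -c /dev/fd/62, which is omitted here, and the flag glosses are inferred from the trace rather than authoritative:
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # multi-core variant: decompress the test bib file for 1 second on cores 0-3
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf
    # multi-thread variant: same workload on one core with two worker threads
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2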
00:06:26.052 21:31:33 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:26.052 [2024-07-24 21:31:33.989570] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:26.052 [2024-07-24 21:31:33.989637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893323 ] 00:06:26.052 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.052 [2024-07-24 21:31:34.044473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.052 [2024-07-24 21:31:34.117039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:26.052 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.053 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.053 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.053 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:26.053 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.053 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.053 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.053 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:26.053 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.053 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:26.053 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.053 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.311 
21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.311 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.312 21:31:34 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.312 21:31:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.251 00:06:27.251 real 0m1.341s 00:06:27.251 user 0m1.235s 00:06:27.251 sys 0m0.119s 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.251 21:31:35 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:27.251 ************************************ 00:06:27.251 END TEST accel_decomp_mthread 00:06:27.251 ************************************ 00:06:27.251 21:31:35 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:27.251 21:31:35 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 
']' 00:06:27.251 21:31:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.251 21:31:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.512 ************************************ 00:06:27.512 START TEST accel_decomp_full_mthread 00:06:27.512 ************************************ 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:27.512 [2024-07-24 21:31:35.396145] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:06:27.512 [2024-07-24 21:31:35.396195] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893569 ] 00:06:27.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.512 [2024-07-24 21:31:35.450557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.512 [2024-07-24 21:31:35.522801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.512 21:31:35 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:27.512 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.513 21:31:35 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.513 21:31:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.894 00:06:28.894 real 0m1.362s 00:06:28.894 user 0m1.260s 00:06:28.894 sys 0m0.114s 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.894 21:31:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:28.894 ************************************ 00:06:28.894 END 
TEST accel_decomp_full_mthread 00:06:28.894 ************************************ 00:06:28.894 21:31:36 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:28.894 21:31:36 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:28.894 21:31:36 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:28.894 21:31:36 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:28.894 21:31:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.894 21:31:36 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.894 21:31:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.894 21:31:36 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.894 21:31:36 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.894 21:31:36 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.894 21:31:36 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.894 21:31:36 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:28.894 21:31:36 accel -- accel/accel.sh@41 -- # jq -r . 00:06:28.894 ************************************ 00:06:28.894 START TEST accel_dif_functional_tests 00:06:28.894 ************************************ 00:06:28.894 21:31:36 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:28.894 [2024-07-24 21:31:36.844145] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:28.894 [2024-07-24 21:31:36.844181] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893815 ] 00:06:28.894 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.894 [2024-07-24 21:31:36.894298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.894 [2024-07-24 21:31:36.968328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.894 [2024-07-24 21:31:36.968425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.894 [2024-07-24 21:31:36.968425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.155 00:06:29.155 00:06:29.155 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.155 http://cunit.sourceforge.net/ 00:06:29.155 00:06:29.155 00:06:29.155 Suite: accel_dif 00:06:29.155 Test: verify: DIF generated, GUARD check ...passed 00:06:29.155 Test: verify: DIF generated, APPTAG check ...passed 00:06:29.155 Test: verify: DIF generated, REFTAG check ...passed 00:06:29.155 Test: verify: DIF not generated, GUARD check ...[2024-07-24 21:31:37.035523] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:29.155 passed 00:06:29.155 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 21:31:37.035569] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:29.155 passed 00:06:29.155 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 21:31:37.035602] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:29.155 passed 00:06:29.155 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:29.155 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 21:31:37.035645] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=30, Expected=28, Actual=14 00:06:29.155 passed 00:06:29.155 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:29.155 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:29.155 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:29.155 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 21:31:37.035744] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:29.155 passed 00:06:29.155 Test: verify copy: DIF generated, GUARD check ...passed 00:06:29.155 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:29.155 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:29.155 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 21:31:37.035855] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:29.155 passed 00:06:29.155 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 21:31:37.035876] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:29.155 passed 00:06:29.155 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 21:31:37.035895] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:29.155 passed 00:06:29.155 Test: generate copy: DIF generated, GUARD check ...passed 00:06:29.155 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:29.155 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:29.155 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:29.155 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:29.155 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:29.155 Test: generate copy: iovecs-len validate ...[2024-07-24 21:31:37.036060] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:29.155 passed 00:06:29.155 Test: generate copy: buffer alignment validate ...passed 00:06:29.155 00:06:29.155 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.155 suites 1 1 n/a 0 0 00:06:29.155 tests 26 26 26 0 0 00:06:29.155 asserts 115 115 115 0 n/a 00:06:29.155 00:06:29.155 Elapsed time = 0.002 seconds 00:06:29.155 00:06:29.155 real 0m0.403s 00:06:29.155 user 0m0.604s 00:06:29.155 sys 0m0.145s 00:06:29.155 21:31:37 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.155 21:31:37 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:29.155 ************************************ 00:06:29.155 END TEST accel_dif_functional_tests 00:06:29.155 ************************************ 00:06:29.155 00:06:29.155 real 0m30.770s 00:06:29.155 user 0m34.468s 00:06:29.155 sys 0m4.149s 00:06:29.155 21:31:37 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.155 21:31:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.155 ************************************ 00:06:29.155 END TEST accel 00:06:29.155 ************************************ 00:06:29.415 21:31:37 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:29.415 21:31:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.415 21:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.415 21:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:29.415 ************************************ 00:06:29.415 START TEST accel_rpc 00:06:29.415 ************************************ 00:06:29.415 21:31:37 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:29.415 * Looking for test storage... 00:06:29.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:29.415 21:31:37 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:29.415 21:31:37 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2894031 00:06:29.415 21:31:37 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2894031 00:06:29.415 21:31:37 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:29.415 21:31:37 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2894031 ']' 00:06:29.415 21:31:37 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.415 21:31:37 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.415 21:31:37 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.415 21:31:37 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.415 21:31:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.415 [2024-07-24 21:31:37.438649] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
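accel_rpc.sh launches a standalone target in --wait-for-rpc mode so that accel opcode assignments can be changed before subsystem initialization. A rough sketch of that startup, assuming this job's workspace layout and the default /var/tmp/spdk.sock socket; the backgrounding is illustrative, not part of the test script:
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt --wait-for-rpc &
    # the harness (waitforlisten) polls until the target accepts RPCs on /var/tmp/spdk.sock,
    # then issues accel_assign_opc / framework_start_init as traced below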
00:06:29.415 [2024-07-24 21:31:37.438703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894031 ] 00:06:29.415 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.415 [2024-07-24 21:31:37.491660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.675 [2024-07-24 21:31:37.566009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.246 21:31:38 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.246 21:31:38 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:30.246 21:31:38 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:30.246 21:31:38 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:30.246 21:31:38 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:30.246 21:31:38 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:30.246 21:31:38 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:30.246 21:31:38 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.246 21:31:38 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.246 21:31:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.246 ************************************ 00:06:30.246 START TEST accel_assign_opcode 00:06:30.246 ************************************ 00:06:30.246 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:30.246 21:31:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:30.246 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.246 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:30.246 [2024-07-24 21:31:38.284130] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:30.246 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.246 21:31:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:30.246 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.246 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:30.246 [2024-07-24 21:31:38.292146] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:30.246 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.246 21:31:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:30.246 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.246 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:30.506 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.506 21:31:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:30.506 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.506 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
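With the target still waiting for RPCs, accel_assign_opcode pins the copy opcode to the software module, lets initialization finish, and reads the assignment back; the earlier assignment to the bogus module "incorrect" only produces a NOTICE and is overwritten. A condensed sketch of the same RPC sequence (the jq filter mirrors the trace):
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m software      # map the copy opcode to the software module
    $RPC framework_start_init                      # complete subsystem initialization
    $RPC accel_get_opc_assignments | jq -r .copy   # expected to print: software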
00:06:30.506 21:31:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:30.506 21:31:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:30.506 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.506 software 00:06:30.506 00:06:30.506 real 0m0.233s 00:06:30.506 user 0m0.049s 00:06:30.506 sys 0m0.009s 00:06:30.506 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.506 21:31:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:30.506 ************************************ 00:06:30.506 END TEST accel_assign_opcode 00:06:30.506 ************************************ 00:06:30.506 21:31:38 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2894031 00:06:30.506 21:31:38 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2894031 ']' 00:06:30.506 21:31:38 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2894031 00:06:30.506 21:31:38 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:30.506 21:31:38 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.506 21:31:38 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2894031 00:06:30.506 21:31:38 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.506 21:31:38 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.506 21:31:38 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2894031' 00:06:30.506 killing process with pid 2894031 00:06:30.506 21:31:38 accel_rpc -- common/autotest_common.sh@967 -- # kill 2894031 00:06:30.506 21:31:38 accel_rpc -- common/autotest_common.sh@972 -- # wait 2894031 00:06:31.077 00:06:31.077 real 0m1.587s 00:06:31.077 user 0m1.698s 00:06:31.077 sys 0m0.391s 00:06:31.077 21:31:38 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.077 21:31:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.077 ************************************ 00:06:31.077 END TEST accel_rpc 00:06:31.077 ************************************ 00:06:31.077 21:31:38 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:31.077 21:31:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.077 21:31:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.077 21:31:38 -- common/autotest_common.sh@10 -- # set +x 00:06:31.077 ************************************ 00:06:31.077 START TEST app_cmdline 00:06:31.077 ************************************ 00:06:31.077 21:31:38 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:31.077 * Looking for test storage... 
00:06:31.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:31.077 21:31:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:31.077 21:31:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2894412 00:06:31.077 21:31:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2894412 00:06:31.077 21:31:39 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:31.077 21:31:39 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2894412 ']' 00:06:31.077 21:31:39 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.077 21:31:39 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.077 21:31:39 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.077 21:31:39 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.077 21:31:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.077 [2024-07-24 21:31:39.101826] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:31.077 [2024-07-24 21:31:39.101875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894412 ] 00:06:31.077 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.077 [2024-07-24 21:31:39.155935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.337 [2024-07-24 21:31:39.235046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.906 21:31:39 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.906 21:31:39 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:31.906 21:31:39 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:32.166 { 00:06:32.166 "version": "SPDK v24.09-pre git sha1 6b560eac9", 00:06:32.166 "fields": { 00:06:32.166 "major": 24, 00:06:32.166 "minor": 9, 00:06:32.166 "patch": 0, 00:06:32.166 "suffix": "-pre", 00:06:32.166 "commit": "6b560eac9" 00:06:32.166 } 00:06:32.166 } 00:06:32.166 21:31:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:32.166 21:31:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:32.166 21:31:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:32.166 21:31:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:32.166 21:31:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:32.166 21:31:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:32.166 21:31:40 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.166 21:31:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:32.166 21:31:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.166 21:31:40 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.166 21:31:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:32.166 21:31:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:32.167 21:31:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.167 21:31:40 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:32.167 21:31:40 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.167 21:31:40 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:32.167 21:31:40 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.167 21:31:40 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:32.167 21:31:40 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.167 21:31:40 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:32.167 21:31:40 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.167 21:31:40 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:32.167 21:31:40 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:32.167 21:31:40 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.427 request: 00:06:32.427 { 00:06:32.427 "method": "env_dpdk_get_mem_stats", 00:06:32.427 "req_id": 1 00:06:32.427 } 00:06:32.427 Got JSON-RPC error response 00:06:32.427 response: 00:06:32.427 { 00:06:32.427 "code": -32601, 00:06:32.427 "message": "Method not found" 00:06:32.427 } 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.427 21:31:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2894412 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2894412 ']' 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2894412 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2894412 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2894412' 00:06:32.427 killing process with pid 2894412 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@967 -- # kill 2894412 00:06:32.427 21:31:40 app_cmdline -- common/autotest_common.sh@972 -- # wait 2894412 00:06:32.687 00:06:32.687 real 0m1.686s 00:06:32.687 user 0m2.033s 00:06:32.687 sys 0m0.417s 00:06:32.687 21:31:40 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:32.687 21:31:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.687 ************************************ 00:06:32.687 END TEST app_cmdline 00:06:32.687 ************************************ 00:06:32.687 21:31:40 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:32.687 21:31:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.687 21:31:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.687 21:31:40 -- common/autotest_common.sh@10 -- # set +x 00:06:32.687 ************************************ 00:06:32.687 START TEST version 00:06:32.687 ************************************ 00:06:32.687 21:31:40 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:32.687 * Looking for test storage... 00:06:32.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:32.687 21:31:40 version -- app/version.sh@17 -- # get_header_version major 00:06:32.687 21:31:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.687 21:31:40 version -- app/version.sh@14 -- # cut -f2 00:06:32.687 21:31:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.687 21:31:40 version -- app/version.sh@17 -- # major=24 00:06:32.687 21:31:40 version -- app/version.sh@18 -- # get_header_version minor 00:06:32.687 21:31:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.687 21:31:40 version -- app/version.sh@14 -- # cut -f2 00:06:32.687 21:31:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.687 21:31:40 version -- app/version.sh@18 -- # minor=9 00:06:32.687 21:31:40 version -- app/version.sh@19 -- # get_header_version patch 00:06:32.687 21:31:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.687 21:31:40 version -- app/version.sh@14 -- # cut -f2 00:06:32.946 21:31:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.946 21:31:40 version -- app/version.sh@19 -- # patch=0 00:06:32.946 21:31:40 version -- app/version.sh@20 -- # get_header_version suffix 00:06:32.946 21:31:40 version -- app/version.sh@14 -- # cut -f2 00:06:32.946 21:31:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.946 21:31:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.946 21:31:40 version -- app/version.sh@20 -- # suffix=-pre 00:06:32.946 21:31:40 version -- app/version.sh@22 -- # version=24.9 00:06:32.947 21:31:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:32.947 21:31:40 version -- app/version.sh@28 -- # version=24.9rc0 00:06:32.947 21:31:40 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:32.947 21:31:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:32.947 21:31:40 version -- app/version.sh@30 -- # 
py_version=24.9rc0 00:06:32.947 21:31:40 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:32.947 00:06:32.947 real 0m0.147s 00:06:32.947 user 0m0.078s 00:06:32.947 sys 0m0.101s 00:06:32.947 21:31:40 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.947 21:31:40 version -- common/autotest_common.sh@10 -- # set +x 00:06:32.947 ************************************ 00:06:32.947 END TEST version 00:06:32.947 ************************************ 00:06:32.947 21:31:40 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:32.947 21:31:40 -- spdk/autotest.sh@198 -- # uname -s 00:06:32.947 21:31:40 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:32.947 21:31:40 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:32.947 21:31:40 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:32.947 21:31:40 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:32.947 21:31:40 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:32.947 21:31:40 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:32.947 21:31:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:32.947 21:31:40 -- common/autotest_common.sh@10 -- # set +x 00:06:32.947 21:31:40 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:32.947 21:31:40 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:32.947 21:31:40 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:32.947 21:31:40 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:32.947 21:31:40 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:32.947 21:31:40 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:32.947 21:31:40 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:32.947 21:31:40 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:32.947 21:31:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.947 21:31:40 -- common/autotest_common.sh@10 -- # set +x 00:06:32.947 ************************************ 00:06:32.947 START TEST nvmf_tcp 00:06:32.947 ************************************ 00:06:32.947 21:31:40 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:32.947 * Looking for test storage... 00:06:32.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:32.947 21:31:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:32.947 21:31:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:32.947 21:31:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:32.947 21:31:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:32.947 21:31:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.947 21:31:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.207 ************************************ 00:06:33.207 START TEST nvmf_target_core 00:06:33.207 ************************************ 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:33.207 * Looking for test storage... 
00:06:33.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.207 21:31:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:33.208 ************************************ 00:06:33.208 START TEST nvmf_abort 00:06:33.208 ************************************ 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:33.208 * Looking for test storage... 00:06:33.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:33.208 21:31:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:33.208 21:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.485 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.485 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:38.485 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.486 21:31:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:38.486 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:38.486 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.486 21:31:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:38.486 Found net devices under 0000:86:00.0: cvl_0_0 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:38.486 Found net devices under 0000:86:00.1: cvl_0_1 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.486 21:31:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:38.486 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.746 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.746 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:38.746 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:38.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:38.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:06:38.746 00:06:38.746 --- 10.0.0.2 ping statistics --- 00:06:38.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.746 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:06:38.746 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:38.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:38.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:06:38.746 00:06:38.746 --- 10.0.0.1 ping statistics --- 00:06:38.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.746 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:06:38.746 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.746 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:38.746 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:38.746 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.746 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:38.746 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2897836 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2897836 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2897836 ']' 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.747 21:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.747 [2024-07-24 21:31:46.723319] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:06:38.747 [2024-07-24 21:31:46.723360] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.747 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.747 [2024-07-24 21:31:46.780087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.747 [2024-07-24 21:31:46.861230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.747 [2024-07-24 21:31:46.861266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.747 [2024-07-24 21:31:46.861273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.747 [2024-07-24 21:31:46.861279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.747 [2024-07-24 21:31:46.861284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:38.747 [2024-07-24 21:31:46.861387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.747 [2024-07-24 21:31:46.861487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.747 [2024-07-24 21:31:46.861489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.684 [2024-07-24 21:31:47.586126] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.684 Malloc0 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:06:39.684 Delay0 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.684 [2024-07-24 21:31:47.670709] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.684 21:31:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:39.684 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.943 [2024-07-24 21:31:47.821289] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:41.883 Initializing NVMe Controllers 00:06:41.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:41.883 controller IO queue size 128 less than required 00:06:41.883 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:41.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:41.883 Initialization complete. Launching workers. 
00:06:41.883 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 114, failed: 41682 00:06:41.883 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41734, failed to submit 62 00:06:41.883 success 41686, unsuccess 48, failed 0 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:41.883 rmmod nvme_tcp 00:06:41.883 rmmod nvme_fabrics 00:06:41.883 rmmod nvme_keyring 00:06:41.883 21:31:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2897836 ']' 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2897836 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2897836 ']' 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2897836 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2897836 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2897836' 00:06:42.143 killing process with pid 2897836 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2897836 00:06:42.143 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2897836 00:06:42.403 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:42.403 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:42.403 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:42.403 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:42.403 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:42.403 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.403 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.403 21:31:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.312 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:44.312 00:06:44.312 real 0m11.135s 00:06:44.312 user 0m13.403s 00:06:44.312 sys 0m4.995s 00:06:44.312 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.312 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:44.312 ************************************ 00:06:44.312 END TEST nvmf_abort 00:06:44.312 ************************************ 00:06:44.312 21:31:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:44.312 21:31:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:44.312 21:31:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.312 21:31:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.312 ************************************ 00:06:44.312 START TEST nvmf_ns_hotplug_stress 00:06:44.312 ************************************ 00:06:44.312 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:44.572 * Looking for test storage... 
00:06:44.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.572 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:44.573 21:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:49.854 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.854 21:31:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:49.854 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:49.854 Found net devices under 0000:86:00.0: cvl_0_0 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.854 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:49.855 Found net devices under 0000:86:00.1: cvl_0_1 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:49.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:49.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:06:49.855 00:06:49.855 --- 10.0.0.2 ping statistics --- 00:06:49.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.855 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:06:49.855 00:06:49.855 --- 10.0.0.1 ping statistics --- 00:06:49.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.855 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2901849 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2901849 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2901849 ']' 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
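For readability, the network plumbing that nvmf_tcp_init traced just above boils down to the following sequence. This is a condensed sketch assembled only from commands already shown in this log (cvl_0_0/cvl_0_1 are the net devices found under the two ice ports at 0000:86:00.0/.1, with 10.0.0.2 as the target address and 10.0.0.1 as the initiator address); nothing new is introduced:

# flush any stale addresses, then isolate the target port in its own network namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP (port 4420) traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check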
00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:49.855 21:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:49.855 [2024-07-24 21:31:57.870037] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:06:49.855 [2024-07-24 21:31:57.870088] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.855 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.855 [2024-07-24 21:31:57.926516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.115 [2024-07-24 21:31:58.006483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.115 [2024-07-24 21:31:58.006517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.115 [2024-07-24 21:31:58.006525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.115 [2024-07-24 21:31:58.006533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.115 [2024-07-24 21:31:58.006538] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:50.115 [2024-07-24 21:31:58.006575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.115 [2024-07-24 21:31:58.006594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.115 [2024-07-24 21:31:58.006596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.685 21:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.685 21:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:06:50.685 21:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:50.685 21:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:50.685 21:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:50.685 21:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.685 21:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:50.685 21:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:50.944 [2024-07-24 21:31:58.863606] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.944 21:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:51.204 21:31:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.204 [2024-07-24 21:31:59.236897] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.204 21:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:51.464 21:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:51.723 Malloc0 00:06:51.723 21:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:51.723 Delay0 00:06:51.723 21:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.982 21:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:52.241 NULL1 00:06:52.241 21:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:52.500 21:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2902340 00:06:52.500 21:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:52.500 21:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:06:52.500 21:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.500 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.500 21:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.759 21:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:52.759 21:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:53.018 true 00:06:53.018 21:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:06:53.018 21:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:53.018 21:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.277 21:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:53.277 21:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:53.536 true 00:06:53.536 21:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:06:53.536 21:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.915 Read completed with error (sct=0, sc=11) 00:06:54.915 21:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.915 21:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:54.915 21:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:54.915 true 00:06:54.915 21:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:06:54.915 21:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.853 21:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.112 21:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:56.112 21:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:56.112 true 00:06:56.112 21:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:06:56.112 21:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.372 21:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.632 21:32:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:56.632 21:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:56.632 true 00:06:56.891 21:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:06:56.891 21:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.891 21:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.151 21:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:57.151 21:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:57.410 true 00:06:57.410 21:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:06:57.410 21:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.350 21:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.350 21:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:58.350 21:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:58.609 true 00:06:58.609 21:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:06:58.609 21:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.609 21:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.869 21:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:58.869 21:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:59.128 true 00:06:59.128 
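Taken together, the RPC trace lines above (ns_hotplug_stress.sh@23 through @50) outline the target bring-up and the namespace hotplug loop this test keeps cycling through. The following is a rough shell sketch reconstructed from those traced commands; the while-loop wrapper and the PERF_PID/null_size bookkeeping are inferred from the @42/@44/@49 lines rather than shown verbatim in the log, and the -Q comment is an interpretation of the "Message suppressed 999 times" output:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# target application, started inside the namespace created earlier
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# initiator load: 30 s of queue-depth-128, 512-byte random reads over NVMe/TCP
# (-Q 1000 appears to throttle per-I/O error printing, matching the "Message suppressed 999 times" lines)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
# while the perf job is alive, keep pulling namespace 1, re-adding Delay0, and growing NULL1
while kill -0 "$PERF_PID" 2>/dev/null; do
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"
done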
21:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:06:59.128 21:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.128 21:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.401 [2024-07-24 21:32:07.416795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.416875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.416914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.416950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.416987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401 [2024-07-24 21:32:07.417592] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.401
[elided: roughly 300 further identical 'ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1' entries, timestamped 2024-07-24 21:32:07.417 through 21:32:07.428 (console time 00:06:59.401-00:06:59.403), with one 'Message suppressed 999 times: Read completed with error (sct=0, sc=15)' notice partway through]
[2024-07-24 21:32:07.428667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.428707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.428745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.428783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.428819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.428849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.429391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.429433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.429470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.429512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.429555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.429600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.429646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.429690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.429731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.429776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.429817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.403 [2024-07-24 21:32:07.429860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.429900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.429944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.429991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.430992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431280] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.431988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.432035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.432079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.432583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.432628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.432670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.432726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.432770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.432814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.432857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 
[2024-07-24 21:32:07.432901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.432946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.432989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.433985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.434035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.434082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.434120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.434152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.434195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.434233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.434269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.404 [2024-07-24 21:32:07.434305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.434995] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.435045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.435092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.435136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.435177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.435687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.435732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.435771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.435808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.435853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.435893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.435934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.435973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 
[2024-07-24 21:32:07.436522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.436986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.437978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.438021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.438069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.438110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.405 [2024-07-24 21:32:07.438149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.438185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.438217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.438253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.438425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.438889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.438937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.438984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439214] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.439953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 
[2024-07-24 21:32:07.440415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.440984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.441602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.442990] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.443035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.443089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.443132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.443179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.443227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.443271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.443314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.406 [2024-07-24 21:32:07.443355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.443982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 [2024-07-24 21:32:07.444023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.407 
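The repeated ctrlr_bdev.c:309 message records a straightforward length check: each read asks for NLB=1 block of 512 bytes, but the SGL describing the payload buffer is only 1 byte long, so 1 * 512 = 512 > 1 and the target rejects the command before any bdev I/O is issued. A minimal shell sketch of that arithmetic, using the values from the log (variable names are illustrative, not the SPDK source):

    #!/usr/bin/env bash
    # Values copied from the repeated log line: "Read NLB 1 * block size 512 > SGL length 1"
    nlb=1           # number of logical blocks requested by the read command
    block_size=512  # namespace block size in bytes
    sgl_length=1    # payload length described by the command's SGL
    if (( nlb * block_size > sgl_length )); then
        # Same condition the log line reports: 512 bytes requested, 1 byte of buffer available
        echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}" >&2
    fi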
00:06:59.407 21:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:06:59.407 21:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
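The two traced script lines show the hotplug stress loop bumping the namespace size to 1009 and resizing the backing null bdev over the RPC socket while the reads above are still in flight. A hedged sketch of that step (the rpc.py path is taken from the trace; the loop bounds and structure are illustrative, not the actual ns_hotplug_stress.sh source):

    #!/usr/bin/env bash
    # Illustrative resize-under-I/O step; only "bdev_null_resize NULL1 1009" appears in the trace.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for null_size in $(seq 1000 1009); do
        # Grow the null bdev backing the exported namespace while reads are outstanding
        "$rpc_py" bdev_null_resize NULL1 "$null_size"
    done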
00:06:59.407 [2024-07-24 21:32:07.445317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:59.407-00:06:59.408 (last message repeated verbatim after the resize, timestamps 21:32:07.445365 through 21:32:07.450482) 00:06:59.408
[2024-07-24 21:32:07.450528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.450572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.450616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.450659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.450701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.450746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.450785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.450830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.450869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.450909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.450956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.450995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.451995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.408 [2024-07-24 21:32:07.452908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.452954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.452997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453125] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.453967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.454009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.454072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.454118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.454161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 
[2024-07-24 21:32:07.454212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.454248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.454294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.454334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.454375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.454418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.454926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.454973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.455966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456685] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.456989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.457023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.457068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.457109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.409 [2024-07-24 21:32:07.457146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.457183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.457222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.457743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.457792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.457834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.457880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.457931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.457974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 
[2024-07-24 21:32:07.458120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.458971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.459995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460272] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.460992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.410 [2024-07-24 21:32:07.461622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.461657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.461700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.461735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 
[2024-07-24 21:32:07.461763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.461793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.461833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.461873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.461910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.461947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.461982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.462994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.463552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464411] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.464956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 
[2024-07-24 21:32:07.465563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.465998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.466038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.466087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.466127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.411 [2024-07-24 21:32:07.466163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.412 [2024-07-24 21:32:07.466204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.412 [2024-07-24 21:32:07.466244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.412 [2024-07-24 21:32:07.466291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.412 [2024-07-24 21:32:07.466336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.412 [2024-07-24 21:32:07.466377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.412 [2024-07-24 21:32:07.466423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.412 [2024-07-24 21:32:07.466473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.412 [2024-07-24 21:32:07.466516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.412 [2024-07-24 21:32:07.466558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.412 [2024-07-24 21:32:07.466600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:06:59.412 [2024-07-24 21:32:07.466638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:59.412 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:06:59.412-00:06:59.417 [identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error records repeat continuously from 21:32:07.466638 through 21:32:07.493907; the run continues below]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.493947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.493985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.417 [2024-07-24 21:32:07.494708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.494741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.494770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.494799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.494828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.494856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 
[2024-07-24 21:32:07.494884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.494913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.494942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.494970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.495985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.496997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497474] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.497975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.498763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 
[2024-07-24 21:32:07.499122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.418 [2024-07-24 21:32:07.499688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.499726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.499767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.499809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.499848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.499896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.499940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.499987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.500970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501297] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.501735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.502918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 
[2024-07-24 21:32:07.502964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.419 [2024-07-24 21:32:07.503931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.420 [2024-07-24 21:32:07.503971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.420 [2024-07-24 21:32:07.504010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.420 [2024-07-24 21:32:07.504051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.420 [2024-07-24 21:32:07.504092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.504959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505639] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.505994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.506027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.835 [2024-07-24 21:32:07.506071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 
[2024-07-24 21:32:07.506708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.506957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.507971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.508992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.509032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.509079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.509117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.836 [2024-07-24 21:32:07.509154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509373] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.509985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.510032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.510084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.510126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.510171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.510216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.510261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.510305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.510352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.510402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.510443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.510492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 [2024-07-24 21:32:07.510540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.837 
[2024-07-24 21:32:07.510584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously, timestamps 21:32:07.510584 through 21:32:07.537063, elapsed 00:06:59.837 to 00:06:59.850 ...]
00:06:59.840 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:06:59.850 [2024-07-24 21:32:07.537063] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.537971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.538012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.538061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.538108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.538159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.538204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 
[2024-07-24 21:32:07.538250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.850 [2024-07-24 21:32:07.538305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.538992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.539603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540905] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.540955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.541953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 
[2024-07-24 21:32:07.541985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.542713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.543975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544665] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.851 [2024-07-24 21:32:07.544820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.544860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.544899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.544941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.544980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 
[2024-07-24 21:32:07.545791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.545932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.546442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.546490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.546536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.546580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.546626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.546672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.546713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.546756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.546809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.546853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.546896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.852 [2024-07-24 21:32:07.546945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.546990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.547972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548452] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.548982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.549971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 
[2024-07-24 21:32:07.550003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.550983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.551958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.853 [2024-07-24 21:32:07.552001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.552054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.552100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.552143] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.552194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.552241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.552755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.552811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.552857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.552904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.552955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.552995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 [2024-07-24 21:32:07.553782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 
[2024-07-24 21:32:07.553834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.854 
[... the same "nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeated continuously from 21:32:07.553 through 21:32:07.580; duplicate occurrences omitted ...] 00:06:59.856 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:59.856 
[2024-07-24 21:32:07.580614] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.580678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.580724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.580767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.580814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.580856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.580902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.580936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.580976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.581966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 
[2024-07-24 21:32:07.582134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.582979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.583977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.584021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.584061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.584099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.584136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.584173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.584218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.584806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.584853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.584892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.584933] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.584980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.585977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 
[2024-07-24 21:32:07.586103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.586985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.587024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.587070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.857 [2024-07-24 21:32:07.587106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.587149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.587194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.587242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.587285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.587333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.587379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.587423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.587470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.587523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.587571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588814] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.588988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 
[2024-07-24 21:32:07.589911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.589995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.590776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.591966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592402] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.592981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 
[2024-07-24 21:32:07.593537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.593856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.594970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.858 [2024-07-24 21:32:07.595777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.595817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.595856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.595900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.595937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.595972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596261] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.596966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.597003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.597598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.597658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.597702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.597742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.597797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 [2024-07-24 21:32:07.597841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.859 
[2024-07-24 21:32:07.597886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 read error repeated continuously; identical entries omitted ...]
00:06:59.859 true
[... the same ctrlr_bdev.c:309 read error repeated continuously; identical entries omitted ...]
00:06:59.861 21:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340
[... ctrlr_bdev.c:309 read errors continue; identical entries omitted ...]
00:06:59.861 21:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... ctrlr_bdev.c:309 read errors continue; identical entries omitted ...]
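The two traced commands above are the namespace hot-plug step of this stress test: target/ns_hotplug_stress.sh line 44 checks that the background I/O process (PID 2902340 in this run) still exists, and line 45 hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1 through rpc.py, presumably while that I/O is still outstanding. A minimal sketch of the step as it appears in this trace follows; the io_pid variable name and any loop around these calls are illustrative assumptions, only the two commands themselves are quoted from the trace.

  io_pid=2902340   # background I/O generator started earlier in the test (PID taken from this log)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  kill -0 "$io_pid"                                              # script line 44: succeeds only if the I/O process is still alive
  "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # script line 45: hot-remove NSID 1 from the subsystem via SPDK's rpc.py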
[2024-07-24 21:32:07.622260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.861 [2024-07-24 21:32:07.622298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.861 [2024-07-24 21:32:07.622344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.861 [2024-07-24 21:32:07.622392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.861 [2024-07-24 21:32:07.622434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.861 [2024-07-24 21:32:07.622478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.622525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.622565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.622603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.622643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.622675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.622717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.622754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.622794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.622831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.622874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.622918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:59.862 [2024-07-24 21:32:07.623497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.623548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.623592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.623636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.623687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.623727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.623772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.623822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 
21:32:07.623870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.623914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.623964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:06:59.862 [2024-07-24 21:32:07.624933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.624977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.625996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.626039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.626089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.626147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.626189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.626233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.626720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.626762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.626800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.626843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.626882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.626921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.626970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627563] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.627992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 
[2024-07-24 21:32:07.628718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.628985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.629023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.629071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.629114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.629144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.629182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.629222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.629261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.629300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.629341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.629879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.629931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.629978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.630969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631385] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.862 [2024-07-24 21:32:07.631898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.631933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.631976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 
[2024-07-24 21:32:07.632461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.632559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.633983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.634965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635166] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.635805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 
[2024-07-24 21:32:07.636781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.636974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.637990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.638966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639013] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.639977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 
[2024-07-24 21:32:07.640523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.863 [2024-07-24 21:32:07.640614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.640661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.640708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.640751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.640801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.640844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.640893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.640937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.640983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.864 [2024-07-24 21:32:07.641659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:59.864 [... the "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entry above repeats verbatim, back to back, from 21:32:07.641659 through 21:32:07.669020 ...]
00:06:59.867 [2024-07-24 21:32:07.669064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.669997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670150] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.670997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.671051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.671520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.671567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.671612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.671651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.671695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 
[2024-07-24 21:32:07.671735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.671772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.671814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.671858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.671890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.671925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.671961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:59.867 [2024-07-24 21:32:07.672402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 
21:32:07.672778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.672951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:06:59.867 [2024-07-24 21:32:07.673932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.673970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.674996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.675997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676570] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.867 [2024-07-24 21:32:07.676843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.676889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.676935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.676975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.677965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 
[2024-07-24 21:32:07.678169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.678990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.679976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680395] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.680614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 
[2024-07-24 21:32:07.681937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.681983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.682983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.683882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684741] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.684962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 [2024-07-24 21:32:07.685876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.868 
00:06:59.868 [2024-07-24 21:32:07.685913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:59.871 [... the identical *ERROR* line from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeats for every request in this unit-test pass, timestamps 2024-07-24 21:32:07.685943 through 21:32:07.713395 ...]
[2024-07-24 21:32:07.713452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.871 [2024-07-24 21:32:07.713496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.871 [2024-07-24 21:32:07.713541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.871 [2024-07-24 21:32:07.713589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.713630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.713674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.713722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.713767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.713807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.713847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.713884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.713918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.713954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.713995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.714969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715611] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.715981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.716978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 
[2024-07-24 21:32:07.717181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.717998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.718984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.719025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.719069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.719109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.719151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.719201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.719247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.719763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.719814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.719857] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.719902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.719947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.720001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.720050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.720095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.720144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.720186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.720230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.720278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.720322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.872 [2024-07-24 21:32:07.720369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.720962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 
[2024-07-24 21:32:07.721052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.721988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.722032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.722080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.722121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.722162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.722202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.722244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.722281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.722324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.722370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.722415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.722461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.722510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:59.873 [2024-07-24 21:32:07.723523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:06:59.873 [2024-07-24 21:32:07.723653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.723973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.724953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.725689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726435] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.726996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.727050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.727096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.727146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.727189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.727232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.727279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.727322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.873 [2024-07-24 21:32:07.727368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.727415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.727464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.727508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.727552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.727600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 
[2024-07-24 21:32:07.727648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.727696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.727740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.727785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.727832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.727874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.727919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.727965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.728969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.729014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.729061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.729108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.729654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.729702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.729752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.729800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.729852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.729895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.729944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.729991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.730041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.730095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.730141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.730186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.730231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.730279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.730332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.730378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.730421] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [2024-07-24 21:32:07.730455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.874 [... identical "Read NLB 1 * block size 512 > SGL length 1" errors from ctrlr_bdev.c:309 (nvmf_bdev_ctrlr_read_cmd) repeat continuously from 21:32:07.730 through 21:32:07.757, log timestamps 00:06:59.874-00:06:59.877; duplicate entries omitted ...] [2024-07-24 21:32:07.757754] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.757803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.757845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.757891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.757939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.757982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.758998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 
[2024-07-24 21:32:07.759330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.759970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.760979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.877 [2024-07-24 21:32:07.761020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761531] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.761626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.762997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 
[2024-07-24 21:32:07.763159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.763995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.764821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765782] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.765982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 
[2024-07-24 21:32:07.766906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.766993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.767957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.768011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.768066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.768552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.768605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.768658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.768704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.768752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.768799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.768842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.768886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.768934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.768978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.769023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.769075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.769119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.878 [2024-07-24 21:32:07.769166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769684] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.769980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 
[2024-07-24 21:32:07.770755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.770974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.771019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.771062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.771094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.771134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.771171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.771210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.771255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.771788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.771839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.771888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.771935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.771978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.772997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773406] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.773960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.774541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 
[2024-07-24 21:32:07.775032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:59.879 [2024-07-24 21:32:07.775247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.775982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.776030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.776087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 21:32:07.776136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:59.879 [2024-07-24 
21:32:07.776179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:59.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:59.880 21:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:59.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:59.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:00.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:00.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:00.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:00.166 [2024-07-24 21:32:07.989320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:00.169 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:00.171 [2024-07-24 21:32:08.009233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.009968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010341] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.010952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 
[2024-07-24 21:32:08.011961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.011999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.012041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.012086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.012126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.012160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.012202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.012246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.012288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.171 [2024-07-24 21:32:08.012332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.012996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.013975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014147] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.014982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 
[2024-07-24 21:32:08.015656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.015988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.172 [2024-07-24 21:32:08.016894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.016933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.016975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.017012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.017058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.017096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.017134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.017172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.017210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.017247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.017427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.017840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.017888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.017935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.017981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018343] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.018957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 21:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:00.173 [2024-07-24 21:32:08.019003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 21:32:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:00.173 [2024-07-24 21:32:08.019405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.019989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.020029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.020079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.020123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.020165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.020219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.020264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.020308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.020353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.020394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.173 [2024-07-24 21:32:08.020437] ctrlr_bdev.c: 
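The two traced script lines above show the hotplug-stress helper bumping its null_size shell variable to 1010 and then resizing the null bdev NULL1 through SPDK's rpc.py while reads keep erroring against the namespace. A minimal, hypothetical sketch of such a resize loop follows; the starting value, step size, and iteration count are assumptions for illustration only — only the rpc.py path, the bdev_null_resize call, and the NULL1/1010 values appear in the log itself.

    #!/usr/bin/env bash
    # Illustrative sketch (not the actual ns_hotplug_stress.sh): repeatedly
    # grow a null bdev via SPDK's JSON-RPC helper while I/O runs against it.
    # Assumes a running SPDK target that already has a null bdev named NULL1.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000                        # assumed starting value
    for _ in $(seq 1 10); do              # assumed iteration count
        null_size=$((null_size + 10))     # e.g. 1010, 1020, ...
        "$rpc_py" bdev_null_resize NULL1 "$null_size"
    done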
[... the same ctrlr_bdev.c:309 read error keeps repeating from 21:32:08.019405 through 21:32:08.031175 (console timestamps 00:07:00.173-00:07:00.175); duplicate entries elided ...]
00:07:00.175 [2024-07-24 21:32:08.031227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.175 [2024-07-24 21:32:08.031838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.031875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.031917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.031954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.031993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032320] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.032972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.033017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.033069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.033116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.033160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.033206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.033246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.033295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.033341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.033843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 
[2024-07-24 21:32:08.033892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.033935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.033981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.034971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.176 [2024-07-24 21:32:08.035738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.035782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.035835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.035883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.035932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.035977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036023] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.036710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 
[2024-07-24 21:32:08.037614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.037996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.038985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.039712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040295] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.177 [2024-07-24 21:32:08.040893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.040932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.040985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 
[2024-07-24 21:32:08.041451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.041972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.042924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.043962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044095] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.044999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.045055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.045097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.045132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.045166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 
[2024-07-24 21:32:08.045207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.045249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.045288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.045325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.045359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.045396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.045435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.045473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.178 [2024-07-24 21:32:08.045515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.045553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.045597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.045639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.045681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.045735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.045783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.045826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.045871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.045915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.045957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.046989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179 [2024-07-24 21:32:08.047827] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.179
[... same *ERROR* entry repeated with successive timestamps from 2024-07-24 21:32:08.047926 through 21:32:08.074700; duplicate entries condensed ...] 00:07:00.180 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:00.185 [2024-07-24 21:32:08.074700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185
[2024-07-24 21:32:08.074745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.074787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.074832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.074879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.074922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.074967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.075982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.076995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077322] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.077964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.185 [2024-07-24 21:32:08.078008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.078059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.078101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.078146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.078190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.078680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.078724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.078764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.078797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.078837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.078877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 
[2024-07-24 21:32:08.078915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.078954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.078996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.079976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.080967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081175] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.081980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 
[2024-07-24 21:32:08.082791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.082970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.083016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.083063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.083108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.083154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.186 [2024-07-24 21:32:08.083200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.083986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.084557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085368] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.085955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 
[2024-07-24 21:32:08.086496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.086993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.187 [2024-07-24 21:32:08.087680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.087722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.088988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089027] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.089979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 
[2024-07-24 21:32:08.090098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.090773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.091293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.091339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.091384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.091429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.091473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.091519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.091562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.091603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.091648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.188 [2024-07-24 21:32:08.091694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.188 
(identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" lines elided; timestamps 2024-07-24 21:32:08.091744 through 21:32:08.103658) 
00:07:00.191 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:00.191 
(identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" lines elided; timestamps 2024-07-24 21:32:08.104168 through 21:32:08.118616) 
00:07:00.194 [2024-07-24 21:32:08.118660] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.118707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.118751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.118796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.118842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.118884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.118929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.118972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 
[2024-07-24 21:32:08.119756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.119942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.194 [2024-07-24 21:32:08.120921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.120961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.121984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122480] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.122938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.123987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 
[2024-07-24 21:32:08.124030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.124960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.195 [2024-07-24 21:32:08.125534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.125581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.125632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.125673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.125719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.125767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.125813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.125858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.125904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.125946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.125991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.126034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.126084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.126131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.126320] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.126694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.126739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.126786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.126828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.126870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.126909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.126952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.126994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 
[2024-07-24 21:32:08.127716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.127978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.128965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.129988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130317] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.196 [2024-07-24 21:32:08.130601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.130645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.130691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.130738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.130782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.130824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.130871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.130915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.130959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 
[2024-07-24 21:32:08.131462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.131956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.132681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.133997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134132] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.134960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.135000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.135045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.135086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 [2024-07-24 21:32:08.135131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.197 
[2024-07-24 21:32:08.135175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:00.198 [... the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously with new timestamps (21:32:08.135175 through 21:32:08.160875, elapsed markers 00:07:00.198 through 00:07:00.203); repeated lines omitted ...]
00:07:00.201 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-07-24 21:32:08.160914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.160953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.160992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.161039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.161081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.161117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.161154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.161193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.161740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.161791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.161833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.161877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.161924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.161974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.162976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.163006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.163053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.163094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.203 [2024-07-24 21:32:08.163137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163544] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.163973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.164999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 
[2024-07-24 21:32:08.165168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.165984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.166954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167469] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.167725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.168221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.168263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.168305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.204 [2024-07-24 21:32:08.168345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.168971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 
[2024-07-24 21:32:08.169063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.169983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.170912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171699] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.171981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 
[2024-07-24 21:32:08.172779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.205 [2024-07-24 21:32:08.172925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.172967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.173962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.174968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175519] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.175989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 
[2024-07-24 21:32:08.176630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.176958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.177000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.177036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.177085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.177135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.177181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.177225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.177270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.177317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.177367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.177411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.177926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.177975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.178020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.178072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.206 [2024-07-24 21:32:08.178123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.178998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.179047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.179081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.179122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.179168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.179208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.179254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.179302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.179341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.207 [2024-07-24 21:32:08.179379] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.211 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:00.212 [2024-07-24 21:32:08.206488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.206535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.207968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208021] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.208998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 
[2024-07-24 21:32:08.209179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.209687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.210984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211917] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.211962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.212 [2024-07-24 21:32:08.212723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.212780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.212824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.212863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.212908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.212951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.212990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 
[2024-07-24 21:32:08.213546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.213599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.213647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.213697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.213744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.213793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.213841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.213885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.213929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.213975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.214974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215840] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.215996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.216040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.216088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.216128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.216176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.216218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.216262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.216306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.216828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.216877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.216924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.216973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 
[2024-07-24 21:32:08.217455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.217967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.218987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.219593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 [2024-07-24 21:32:08.220086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:00.213 true 00:07:00.473 21:32:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:00.473 21:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.409 21:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.409 21:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:01.409 21:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:01.669 true 00:07:01.669 21:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:01.669 21:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.928 21:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.929 21:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:01.929 21:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:02.188 true 00:07:02.188 21:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:02.188 21:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.447 21:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.447 [2024-07-24 21:32:10.562498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.447 [2024-07-24 21:32:10.562559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.447 [2024-07-24 21:32:10.562604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.447 [2024-07-24 21:32:10.562652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > 
SGL length 1 00:07:02.447 [2024-07-24 21:32:10.562688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.447 (identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* "Read NLB 1 * block size 512 > SGL length 1" entries repeated continuously from 21:32:10.562 through 21:32:10.570; duplicate lines omitted) [2024-07-24 21:32:10.570392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.734
[2024-07-24 21:32:10.570431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.734 [2024-07-24 21:32:10.570471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.734 [2024-07-24 21:32:10.570513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.734 [2024-07-24 21:32:10.570550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.734 [2024-07-24 21:32:10.570586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.734 [2024-07-24 21:32:10.570628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.734 [2024-07-24 21:32:10.570663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.734 [2024-07-24 21:32:10.570705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.734 [2024-07-24 21:32:10.570747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.570790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.570832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.570875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.570916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.570957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.570995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.571775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.572989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573069] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.573989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 
[2024-07-24 21:32:10.574217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.574992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.575472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.575524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.575563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.575607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.575646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.575683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.575727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.575779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.575824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.735 [2024-07-24 21:32:10.575872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.575915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.575952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.575991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576856] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.576992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 
[2024-07-24 21:32:10.577940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.577987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.578035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.578083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.578131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.578647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.578697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.578742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.578787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.578832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.578881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.578928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.578977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.579957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.580000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.580036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.580083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.580124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.580162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.580201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.736 [2024-07-24 21:32:10.580242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580583] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.580959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.581994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 
[2024-07-24 21:32:10.582198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.582985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.583976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.584016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.584067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.584111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.584156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.584200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.584244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.584283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.584329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.584364] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.584405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.584448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.737 [2024-07-24 21:32:10.584487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.584964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 
[2024-07-24 21:32:10.585936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.585984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.738 [2024-07-24 21:32:10.586991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:07:02.738 [2024-07-24 21:32:10.587037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 -- same message repeated at every logged timestamp from 2024-07-24 21:32:10.587086 through 21:32:10.590542]
00:07:02.739 21:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
[same nvmf_bdev_ctrlr_read_cmd read error repeated at every logged timestamp from 21:32:10.590589 through 21:32:10.590852]
00:07:02.739 21:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
[same nvmf_bdev_ctrlr_read_cmd read error repeated at every logged timestamp from 21:32:10.591373 through 21:32:10.606822]
00:07:02.743 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[same nvmf_bdev_ctrlr_read_cmd read error repeated at every logged timestamp from 21:32:10.607377 through 21:32:10.614223]
00:07:02.744 [2024-07-24 21:32:10.614261] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.614997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.615048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.615096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.615141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.615187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.615233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.615276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.744 [2024-07-24 21:32:10.615325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 
[2024-07-24 21:32:10.615455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.615999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.616040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.616089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.616124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.616164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.616202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.616249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.616289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.616327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.616367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.616406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.616451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.616493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.617971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618206] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.618981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.619027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.619076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.619115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.619152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.619198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.619241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.619282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 
[2024-07-24 21:32:10.619323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.619359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.619392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.745 [2024-07-24 21:32:10.619433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.619472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.619510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.619550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.619593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.619636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.619674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.619720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.619762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.620997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621919] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.621962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.622981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.623502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.623552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 
[2024-07-24 21:32:10.623595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.623640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.623678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.623717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.623760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.623800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.623836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.623881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.623922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.623967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.746 [2024-07-24 21:32:10.624584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.624625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.624666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.624710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.624752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.624794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.624832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.624883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.624925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.624972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625820] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.625982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.626985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 
[2024-07-24 21:32:10.627391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.627993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.747 [2024-07-24 21:32:10.628632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.628678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.628722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.628761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.628803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.628854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.628896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.628947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.628994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.629039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.629090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.629133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.629181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.629225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.629270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.629316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.629366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.629413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.629459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.629979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630141] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.630960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.631001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.631041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.631085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.631133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.631174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 [2024-07-24 21:32:10.631213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 
[2024-07-24 21:32:10.631251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.748 
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" records repeat continuously between 21:32:10.631251 and 21:32:10.658296 (elapsed 00:07:02.748 to 00:07:02.754) ...]
[2024-07-24 21:32:10.658342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 
[2024-07-24 21:32:10.658390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.658433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.658478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.658525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:02.754 [2024-07-24 21:32:10.659009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 
21:32:10.659861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.659982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.660916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:02.754 [2024-07-24 21:32:10.660960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.661006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.661058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.661104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.661154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.661200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.661245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.661290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.661336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.754 [2024-07-24 21:32:10.661383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.661424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.661480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.661523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.661570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.661616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.661659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.662950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663637] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.663973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 
[2024-07-24 21:32:10.664837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.664929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.665994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.666034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.666084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.666126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.666157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.666202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.666235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.666274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.666314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.666358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.666404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.666444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.755 [2024-07-24 21:32:10.666478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.666514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.666560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.666605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.666659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.666701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.666747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.666795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.666836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.666883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.666929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.666970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667477] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.667996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.668033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.668539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.668591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.668633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.668671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.668715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.668760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.668804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.668850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.668896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.668942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.668985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 
[2024-07-24 21:32:10.669029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.669967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.670982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.671031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.671095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.671139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.756 [2024-07-24 21:32:10.671189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.671234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.671283] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.671326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.671827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.671876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.671918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.671965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 
[2024-07-24 21:32:10.672825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.672981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.673970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.674015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.674067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.674111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.674160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.674205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.674248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.674299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.674342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.674390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.674432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.674477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.674978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675453] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.757 [2024-07-24 21:32:10.675493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line from ctrlr_bdev.c:309 repeated several hundred times, timestamps 21:32:10.675 through 21:32:10.702]
00:07:02.763 [2024-07-24 21:32:10.702857] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.702899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.702948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.702993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.703727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 
[2024-07-24 21:32:10.704504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.763 [2024-07-24 21:32:10.704941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.704973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.705995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706732] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.706972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.707987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 
[2024-07-24 21:32:10.708317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:02.764 [2024-07-24 21:32:10.708400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.708957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 
21:32:10.709435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.764 [2024-07-24 21:32:10.709671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.709718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.709766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.709818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.709865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.709913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.709957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.710951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:02.765 [2024-07-24 21:32:10.710989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.711989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.712962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713289] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.713497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 [2024-07-24 21:32:10.714903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.765 
[2024-07-24 21:32:10.714944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.714984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.715982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.716845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717663] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.717961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 
[2024-07-24 21:32:10.718729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.718984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.719031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.766 [2024-07-24 21:32:10.719079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.719992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.720968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.721011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.721059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.721101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.721144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.721186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.721230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.721275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.721319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.721364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 [2024-07-24 21:32:10.721407] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.767 (the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error line repeats for every read submitted between 21:32:10.721460 and 21:32:10.746627; the elapsed-time markers advance from 00:07:02.767 to 00:07:02.772)
[2024-07-24 21:32:10.746680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.746724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.746769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.746813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.746867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.746914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.746966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.747984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.748026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.748074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.748118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.748157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.748195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.748245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.748297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.748340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.748392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.772 [2024-07-24 21:32:10.748439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.748486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.748532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.748577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.748627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.748672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.748720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.748767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.748820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.748861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.748905] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.748954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.749977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 
[2024-07-24 21:32:10.750481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.750973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.751984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.752621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753287] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.773 [2024-07-24 21:32:10.753804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.753857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.753900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.753946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.753993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 
[2024-07-24 21:32:10.754515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.754986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.755964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.756009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.756513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.756565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.756614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.756664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.756708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.756752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.756808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.756848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.756896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.756939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.756984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757238] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.757969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.758002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.758048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.758089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.758127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.758165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.758212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.758250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 
[2024-07-24 21:32:10.758285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.758325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.758363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.758406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.774 [2024-07-24 21:32:10.758449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.758495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.758535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.758576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.758615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.758657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.758706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.758749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.758794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.758843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.758887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.758933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.758983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.759036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.759087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.759133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.759179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.759223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.759786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.759837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.759887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.759926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.759965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:02.775 [2024-07-24 21:32:10.760065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.760938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:02.775 [2024-07-24 21:32:10.760982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.761981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.762024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.762072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.762116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.762159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.762205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.762249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.762299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.762342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.762381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.762421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.762463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.762969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.763019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.763065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.775 [2024-07-24 21:32:10.763106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763659] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.763984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 
[2024-07-24 21:32:10.764862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.764996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.765780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.766978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.767012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.767063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.767105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.767143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.767184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.767226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.767268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.767309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 true 00:07:02.776 [2024-07-24 21:32:10.767350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.767396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.767437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.767475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.776 [2024-07-24 21:32:10.767511] ctrlr_bdev.c: 
00:07:02.782 21:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340
00:07:02.782 21:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
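The repeated *ERROR* line above is the target rejecting reads whose data length (NLB 1 * block size 512 = 512 bytes) is larger than the 1-byte SGL carried by the command; the "Message suppressed 999 times" entry further down shows the matching completions (sct=0, sc=15, consistent with the generic NVMe status Data SGL Length Invalid). The two traced commands are the hot-plug stress step itself: kill -0 2902340 checks that the I/O generator process is still alive, then rpc.py nvmf_subsystem_remove_ns detaches namespace 1 from nqn.2016-06.io.spdk:cnode1 while that I/O is in flight. A minimal sketch of that pattern follows; it is not the actual target/ns_hotplug_stress.sh, and the bdev name Malloc0, the PERF_PID variable, and the sleep intervals are assumptions, only the NQN and nsid come from the trace above.

#!/usr/bin/env bash
# Hedged sketch of an ns hot-plug stress loop (not the real test script).
# Assumes an SPDK nvmf target already serves nqn.2016-06.io.spdk:cnode1 with
# bdev Malloc0 attached as namespace 1, and PERF_PID holds the PID of an I/O
# generator started against that namespace (2902340 in the trace above).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

while kill -0 "$PERF_PID" 2>/dev/null; do        # keep cycling while I/O is still running
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # hot-remove nsid 1 under load
    sleep 1
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0  # re-attach the bdev (target assigns a free nsid)
    sleep 1
done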
00:07:02.786 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:02.786 [2024-07-24 21:32:10.812737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*:
Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.812778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.812818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.812855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.812894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.812939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.812984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813905] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.813946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.814421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.814463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.786 [2024-07-24 21:32:10.814502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.814539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.814579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.814622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.814665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.814705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.814745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.814783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.814825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.814868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.814910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.814956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 
[2024-07-24 21:32:10.815477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.815983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.816974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.817972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818104] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.818990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.819032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.819068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.819111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.819154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.787 [2024-07-24 21:32:10.819191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 
[2024-07-24 21:32:10.819229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.819986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.820031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.820079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.820122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.820166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.820213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.820255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.820302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.820822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.820870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.820913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.820957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.821965] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:02.788 [2024-07-24 21:32:10.822537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.822582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.822620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.822662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.822700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.822760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.822797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.822839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.822885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.822927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.822968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 
[2024-07-24 21:32:10.823106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.823541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.069 [2024-07-24 21:32:10.824779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.824816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.824858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.824899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.824951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.824992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825666] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.825966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.826708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 
[2024-07-24 21:32:10.827238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.827958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.828984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.829029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.829081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.829125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.070 [2024-07-24 21:32:10.829168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829502] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.829890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.830958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 [2024-07-24 21:32:10.831006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.071 
[2024-07-24 21:32:10.831056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:07:03.071 [message above repeated verbatim, timestamps 2024-07-24 21:32:10.831099 through 21:32:10.857832] 
00:07:03.077 
[2024-07-24 21:32:10.857869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.857903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.857938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.857978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.858983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.859923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:03.077 [2024-07-24 21:32:10.859964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:03.077 [2024-07-24 21:32:10.860382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.077 [2024-07-24 21:32:10.860807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.860850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.860894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.860939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.860991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.861693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.862955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863100] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.863961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 
[2024-07-24 21:32:10.864255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.864898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.865479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.865529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.865572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.865618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.865664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.865713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.865758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.865807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.865856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.865903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.865950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.078 [2024-07-24 21:32:10.865994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.866963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867002] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.867960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.868007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.868056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 
[2024-07-24 21:32:10.868105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.868148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.868194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.868695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.868749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.868797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.868842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.868882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.868914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.868954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.868993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.869034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.079 [2024-07-24 21:32:10.869084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.869993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870727] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.870996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.871041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.871093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.871142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.871193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.871234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.871282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.871327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.871373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.871417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.871911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.871957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.871998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 
[2024-07-24 21:32:10.872321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.872958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.080 [2024-07-24 21:32:10.873778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.873817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.873862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.873903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.873940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.873979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.081 [2024-07-24 21:32:10.874497] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:03.081 [... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 line repeats continuously, timestamps 2024-07-24 21:32:10.874540 through 21:32:10.901771, console clock 00:07:03.081 - 00:07:03.086 ...]
00:07:03.086 [2024-07-24 21:32:10.901818] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.086 [2024-07-24 21:32:10.901863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.086 [2024-07-24 21:32:10.901906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.086 [2024-07-24 21:32:10.901954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.902961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.903009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 
[2024-07-24 21:32:10.903061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.903107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.903160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.903202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.903250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.903296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.903343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.903391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.903437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.903931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.903980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.904978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905753] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.905981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.906629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.907175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.907219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.087 [2024-07-24 21:32:10.907257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 
[2024-07-24 21:32:10.907384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.907958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.908978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909707] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.909991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.910545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.910598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.910643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.910687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.910739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.910787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.910832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.910876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.910922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.910974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.911020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:03.088 [2024-07-24 21:32:10.911072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.911117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.911170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.911213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.911257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.911289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.911331] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.911372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.911412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.911450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.088 [2024-07-24 21:32:10.911491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.911529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.911578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.911621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.911658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.911690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.911729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.911766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.911805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.911845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.911884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.911931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.911970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 
[2024-07-24 21:32:10.912396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.912959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.913006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.913058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.913108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.913159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.913204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.913251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.913307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.913823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.913873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.913919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.913963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.914960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915133] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.915974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.916023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.916071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.916122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.916165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.916214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.089 [2024-07-24 21:32:10.916263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.916301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 
[2024-07-24 21:32:10.916341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.916381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.916424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.916465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.916507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.916543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.916579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.917997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918950] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.090 [2024-07-24 21:32:10.918995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(previous ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* message repeated verbatim, with only the timestamp advancing, from [2024-07-24 21:32:10.918995] through [2024-07-24 21:32:10.946820], elapsed time 00:07:03.090 to 00:07:03.096)
00:07:03.096 [2024-07-24 21:32:10.946859] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.946891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.946934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.946973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 
[2024-07-24 21:32:10.947947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.947984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.948967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.949973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950654] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.096 [2024-07-24 21:32:10.950796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.950843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.950891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.950937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.950986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 
[2024-07-24 21:32:10.951848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.951973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.952017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.952061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.952108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.952147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.952184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.952215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.952259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.952298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.952890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.952939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.952989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.953987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954663] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.954963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.955003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.097 [2024-07-24 21:32:10.955049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.955625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 
[2024-07-24 21:32:10.956196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.956973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.957957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958372] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.958834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.959933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 
[2024-07-24 21:32:10.959974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.960013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.960057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.098 [2024-07-24 21:32:10.960101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.960991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.961961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.962004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 [2024-07-24 21:32:10.962052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.099 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:03.099 21:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.099 Message suppressed 999 
00:07:03.099 [... the "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" line is repeated several more times; duplicates collapsed ...]
00:07:03.383 [2024-07-24 21:32:11.173618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:03.383 [... the same *ERROR* line is logged once per failing read (timestamps 21:32:11.173681 through 21:32:11.179306); duplicate log lines collapsed ...]
00:07:03.385 [2024-07-24 21:32:11.179345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.179385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.179898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.179950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.179994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180944] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.180986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.181982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 
[2024-07-24 21:32:11.182078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.182687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.385 [2024-07-24 21:32:11.183669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.183698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.183737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.183776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.183821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.183861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.183902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.183940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.183983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184732] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.184965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.185866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 
[2024-07-24 21:32:11.186354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.186956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:03.386 [2024-07-24 21:32:11.187141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 
21:32:11.187417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.187991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.188032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.188075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.188114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.188146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.188176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.386 [2024-07-24 21:32:11.188205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:03.387 [2024-07-24 21:32:11.188388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.188772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.189981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190722] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.190990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.191723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 
[2024-07-24 21:32:11.192276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.387 [2024-07-24 21:32:11.192885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.192932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.192977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.193966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194489] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.194947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 [2024-07-24 21:32:11.195959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.388 
[2024-07-24 21:32:11.195997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:03.388 [... the same ctrlr_bdev.c:309 "Read NLB 1 * block size 512 > SGL length 1" error repeats back-to-back from 21:32:11.195997 through 21:32:11.203326; duplicate lines omitted ...]
00:07:03.390 [... the ctrlr_bdev.c:309 error keeps repeating from 21:32:11.203357 through 21:32:11.204811; duplicate lines omitted ...]
00:07:03.390 21:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:07:03.390 [... the ctrlr_bdev.c:309 error keeps repeating from 21:32:11.204850 through 21:32:11.205220; duplicate lines omitted ...]
00:07:03.390 21:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:07:03.390 [... the ctrlr_bdev.c:309 error keeps repeating from 21:32:11.205257 through 21:32:11.222413; duplicate lines omitted ...]
[2024-07-24 21:32:11.222460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.394 [2024-07-24 21:32:11.222507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.394 [2024-07-24 21:32:11.222550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.394 [2024-07-24 21:32:11.222596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.394 [2024-07-24 21:32:11.222640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.394 [2024-07-24 21:32:11.222683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.394 [2024-07-24 21:32:11.222726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.394 [2024-07-24 21:32:11.222779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.222821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.222863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.222909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.222951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.222995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.223046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.223087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.223137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.223183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.223232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.223279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.223328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.223829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.223873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.223911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.223949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.223988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.224998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225082] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.225963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.226007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.226055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.226100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.226145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.226187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 
[2024-07-24 21:32:11.226234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.226279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.226325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.226369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.226410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.226461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.226508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.395 [2024-07-24 21:32:11.227565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.227989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228435] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.228957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 
[2024-07-24 21:32:11.229847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.229988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.230989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.231027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.231067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.231116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.231155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.231196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.396 [2024-07-24 21:32:11.231242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231926] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.231969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.232980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 
[2024-07-24 21:32:11.233471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.233975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.234973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.235995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.236031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.236078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.236121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.236170] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.236215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.236262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 [2024-07-24 21:32:11.236306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.397 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:03.397 [2024-07-24 21:32:11.236357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.236974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237246] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.237984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.238021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.238054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.238082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.238117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.238161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 [2024-07-24 21:32:11.238199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.398 
[2024-07-24 21:32:11.238240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:03.398 [2024-07-24 21:32:11.238278 .. 21:32:11.265580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same error repeated continuously over this interval; only the timestamp changes)
00:07:03.404 [2024-07-24 21:32:11.265630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404
[2024-07-24 21:32:11.265669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.265718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.265762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.265804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.265854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.265900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.265945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.265994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.266969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.267995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268379] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.404 [2024-07-24 21:32:11.268834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.268880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.268911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.268955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 
[2024-07-24 21:32:11.269491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.269973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.270990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.271990] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.272958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.273003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.273050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 
[2024-07-24 21:32:11.273098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.273145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.273190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.273237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.405 [2024-07-24 21:32:11.273283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.273330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.273371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.273422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.273467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.273516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.274979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275758] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.275965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.276700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 
[2024-07-24 21:32:11.277330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.277992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.278039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.278092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.278135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.278179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.278230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.406 [2024-07-24 21:32:11.278274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.278985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279514] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.279867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.280970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 
[2024-07-24 21:32:11.281180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.281978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.282015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.282061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.282101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.282136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.407 [2024-07-24 21:32:11.282173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:07:03.408 [2024-07-24 21:32:11.282254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error, "Read NLB 1 * block size 512 > SGL length 1", repeats for every read submitted between 2024-07-24 21:32:11.282296 and 21:32:11.287066 ...]
00:07:03.408 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same ctrlr_bdev.c:309 error continues to repeat between 2024-07-24 21:32:11.287111 and 21:32:11.309724 ...]
00:07:03.413 [2024-07-24 21:32:11.309776] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.309821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.309865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.309917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.309961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 [2024-07-24 21:32:11.310926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.413 
[2024-07-24 21:32:11.310966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.311975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.312019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.312058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.312614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.312666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.312710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.312758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.312805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.312847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.312891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.312950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.312995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313630] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.313992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 
[2024-07-24 21:32:11.314777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.314963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.315980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.414 [2024-07-24 21:32:11.316024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.316977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317376] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.317980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.318028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.318083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.318128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.318168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.318211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.318261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.318312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.318361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.318408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.318450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.318492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.318532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 
[2024-07-24 21:32:11.318569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.319991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.415 [2024-07-24 21:32:11.320686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.320724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.320764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.320805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.320843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.320883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.320930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.320974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321223] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.321827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 
[2024-07-24 21:32:11.322780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.322982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.323954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.416 [2024-07-24 21:32:11.324757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.324802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.324847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.324888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.324932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.324970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.325466] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.325514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.325561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.325599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.325640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.325680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.325726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.325770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.325814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.325864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.325913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.325959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [2024-07-24 21:32:11.326623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 
[2024-07-24 21:32:11.326672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.417 [identical nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error lines repeated several hundred times between 21:32:11.326 and 21:32:11.337 omitted] 00:07:03.419 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:03.419 [identical nvmf_bdev_ctrlr_read_cmd error lines repeated several hundred times between 21:32:11.338 and 21:32:11.353 omitted] 00:07:03.423 [2024-07-24 21:32:11.353851] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.353890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.353926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.353966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.354010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.354054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.354093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.354138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.354652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.354703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.354751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.354796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.354842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.354887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.354931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.354977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 
[2024-07-24 21:32:11.355423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.355982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.356975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.357016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.357062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.357104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.357145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.357188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.357229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.357267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.357314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.357834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.357882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.357936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.357982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358131] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.358960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.359005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.359057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.359104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.359155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.359200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.423 [2024-07-24 21:32:11.359248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 
[2024-07-24 21:32:11.359339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.359975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.360586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361959] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.361998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.362957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 
[2024-07-24 21:32:11.363002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.363805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.364294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.364341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.364378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.364416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.364455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.364497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.364536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.364571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.424 [2024-07-24 21:32:11.364609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.364652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.364694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.364732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.364773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.364815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.364855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.364898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.364943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.364995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365748] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.365980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 
[2024-07-24 21:32:11.366926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.366996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.367964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.368961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.369007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.425 [2024-07-24 21:32:11.369051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369449] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.369925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.370489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.370535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.370575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.370614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.370649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.370691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.370734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.370776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 [2024-07-24 21:32:11.370816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.426 
[2024-07-24 21:32:11.370862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:07:03.426 [same error line logged continuously, only the timestamp advancing, from 21:32:11.370862 through 21:32:11.376942] 
00:07:03.427 true 
00:07:03.427 [same error line logged continuously from 21:32:11.376971 through 21:32:11.387411] 
00:07:03.429 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:07:03.429 [same error line logged continuously from 21:32:11.387458 through 21:32:11.389178] 
00:07:03.430 21:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 
00:07:03.430 21:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:07:03.430 [2024-07-24 21:32:11.389705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:07:03.431 [same error line logged continuously from 21:32:11.389757 through 21:32:11.397642] 
[2024-07-24 21:32:11.397683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.397718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.397757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.397795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.397833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.397876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.397919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.397964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.398761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.399979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400336] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.432 [2024-07-24 21:32:11.400618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.400660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.400697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.400737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.400783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.400820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.400858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.400904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.400947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.400995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 
[2024-07-24 21:32:11.401464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.401962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.402999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.403983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404115] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.433 [2024-07-24 21:32:11.404994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.405036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.405085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.405147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.405189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 
[2024-07-24 21:32:11.405711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.405758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.405803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.405846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.405885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.405929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.405970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.406990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.407970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.408008] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.408052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.408099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.408144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.408183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.408224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.408262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.408307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.408353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.408391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.408433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.408471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 
[2024-07-24 21:32:11.409721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.434 [2024-07-24 21:32:11.409996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.410993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.411800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412363] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.412960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 
[2024-07-24 21:32:11.413450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.435 [2024-07-24 21:32:11.413612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.413652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.413692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.413728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.413757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.413796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.413834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.413874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.413922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.413970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:03.436 [2024-07-24 21:32:11.414588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.436 [... several hundred further, essentially identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" records (timestamps 21:32:11.414635 through 21:32:11.427702) trimmed here; only the timestamps differ ...] 00:07:03.438 [2024-07-24 21:32:11.427741] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.438 [2024-07-24 21:32:11.427778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.438 [2024-07-24 21:32:11.427822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.438 [2024-07-24 21:32:11.427864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.438 [2024-07-24 21:32:11.427900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.438 [2024-07-24 21:32:11.427944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.438 [2024-07-24 21:32:11.427986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.439 [2024-07-24 21:32:11.428035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:03.439 [2024-07-24 21:32:11.428600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:04.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.376 21:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.635 21:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:04.635 21:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:04.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.894 true 00:07:04.894 21:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:04.894 21:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.155 21:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.155 21:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:05.155 21:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:05.419 true 00:07:05.419 21:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:05.419 21:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.684 21:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.943 21:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:05.943 21:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:05.943 true 00:07:05.943 21:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:05.943 21:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.202 21:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.461 21:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:06.461 21:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:06.461 true 00:07:06.461 21:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:06.461 21:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.837 21:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.096 21:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:08.096 21:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:08.096 true 00:07:08.096 21:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:08.096 21:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.035 21:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.294 21:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:09.294 21:32:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:09.294 true 00:07:09.294 21:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:09.294 21:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.553 21:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.812 21:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:09.812 21:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:09.812 true 00:07:09.812 21:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:09.812 21:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.191 21:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.191 21:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:11.191 21:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:11.451 true 00:07:11.451 21:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:11.451 21:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.389 21:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.389 21:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:12.389 21:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:12.648 true 00:07:12.648 21:32:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:12.648 21:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.908 21:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.168 21:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:13.168 21:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:13.168 true 00:07:13.168 21:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:13.168 21:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.547 21:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.547 21:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:14.547 21:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:14.807 true 00:07:14.807 21:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:14.807 21:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.749 21:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.749 21:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:15.749 21:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:16.008 true 00:07:16.008 21:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:16.008 21:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
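For reference, the add/resize/remove cycle being traced above boils down to the following minimal sketch. It is reconstructed from the trace, not a verbatim copy of test/nvmf/target/ns_hotplug_stress.sh; the iteration count, the starting size and the target_pid variable are illustrative (the run above checks PID 2902340), while the rpc.py path, subsystem NQN and bdev names are the ones the log itself shows.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  null_size=1015                                   # grows by one each pass, matching the null_size=1015,1016,... lines above
  for pass in $(seq 1 10); do                      # illustrative iteration count
      $rpc nvmf_subsystem_add_ns "$nqn" Delay0     # hot-add the Delay0 bdev as a namespace
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"     # resize NULL1 while reads are still in flight
      kill -0 "$target_pid"                        # bail out if the nvmf target process has died
      $rpc nvmf_subsystem_remove_ns "$nqn" 1       # hot-remove namespace 1 again
  done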
00:07:16.267 21:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.267 21:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:16.267 21:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:16.527 true 00:07:16.527 21:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:16.527 21:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.909 21:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.909 21:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:17.909 21:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:18.170 true 00:07:18.170 21:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:18.170 21:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.109 21:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.109 21:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:19.109 21:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:19.369 true 00:07:19.369 21:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:19.369 21:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.369 21:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.628 21:32:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:19.628 21:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:19.886 true 00:07:19.886 21:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:19.886 21:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.912 21:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.171 21:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:21.171 21:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:21.430 true 00:07:21.430 21:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:21.430 21:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.367 21:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.367 21:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:22.367 21:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:22.627 true 00:07:22.627 21:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:22.627 21:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.627 Initializing NVMe Controllers 00:07:22.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:22.627 Controller IO queue size 128, less than required. 00:07:22.627 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:22.627 Controller IO queue size 128, less than required. 00:07:22.627 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:22.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:22.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:22.627 Initialization complete. Launching workers. 00:07:22.627 ======================================================== 00:07:22.627 Latency(us) 00:07:22.627 Device Information : IOPS MiB/s Average min max 00:07:22.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3081.18 1.50 26531.82 1708.63 1085321.25 00:07:22.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15669.33 7.65 8148.53 1919.70 306293.94 00:07:22.627 ======================================================== 00:07:22.627 Total : 18750.51 9.16 11169.37 1708.63 1085321.25 00:07:22.627 00:07:22.627 21:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.886 21:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:22.886 21:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:23.146 true 00:07:23.146 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2902340 00:07:23.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2902340) - No such process 00:07:23.146 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2902340 00:07:23.146 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.146 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.406 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:23.406 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:23.406 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:23.406 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.406 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:23.664 null0 00:07:23.664 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:23.664 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.664 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:23.664 null1 00:07:23.923 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:07:23.923 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.923 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:23.923 null2 00:07:23.923 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:23.923 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.923 21:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:24.182 null3 00:07:24.182 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.182 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.182 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:24.442 null4 00:07:24.442 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.442 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.442 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:24.442 null5 00:07:24.442 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.442 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.442 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:24.701 null6 00:07:24.701 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.701 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.701 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:24.961 null7 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
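The eight null bdevs that just appeared (null0 through null7, each created with size 100 and block size 4096) correspond to a simple creation loop; a minimal sketch, assuming the same rpc.py path as in the sketch above:
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  for ((i = 0; i < nthreads; i++)); do
      $rpc bdev_null_create "null$i" 100 4096      # bdev name, size, block size (as passed by the script)
  done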
00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:24.961 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
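From here on the trace interleaves eight background invocations of the script's add_remove helper, each bound to one namespace ID and one null bdev, repeatedly attaching and detaching that namespace on nqn.2016-06.io.spdk:cnode1. The following is a reconstruction of the loop structure implied by the @14-@18 and @62-@66 trace markers, not a copy of the source; the real script may differ in detail:

```bash
# Reconstructed from the xtrace markers above (ns_hotplug_stress.sh@14-18, @62-66).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1
nthreads=8

add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        # @17: attach the bdev as namespace $nsid, @18: detach it again
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"
    done
}

pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &   # @63: one worker per (nsid, bdev) pair
    pids+=($!)                         # @64: remember the worker pid
done
wait "${pids[@]}"                      # @66: block until every worker finishes
```

The interleaving of add_ns and remove_ns calls across workers in the log is exactly the intended stress: namespaces 1-8 are hot-added and hot-removed concurrently while the target stays up.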
00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2907948 2907950 2907953 2907957 2907958 2907960 2907962 2907964 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.962 21:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.962 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.962 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.221 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.221 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.221 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.222 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.480 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.480 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.480 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.480 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.480 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.480 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.480 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.480 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.739 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.740 21:32:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.740 21:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.999 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.999 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.999 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.999 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.999 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.000 21:32:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.000 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.259 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.260 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.260 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.260 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.260 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.260 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.260 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.260 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.519 21:32:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.519 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.520 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.520 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.520 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.520 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.520 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.520 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.520 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.520 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.780 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.040 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.041 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.041 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.041 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.041 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.041 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.041 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.041 21:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.300 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.301 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.301 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.561 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.821 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.821 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:27.821 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.821 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.821 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.822 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.822 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.822 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.822 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.822 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.822 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.822 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.822 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.822 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.082 21:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.082 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.082 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.082 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.082 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.082 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.082 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.082 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.082 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.341 21:32:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.341 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.341 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.341 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.341 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.341 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.341 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.341 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:28.342 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.602 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.862 
21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.862 rmmod nvme_tcp 00:07:28.862 rmmod nvme_fabrics 00:07:28.862 rmmod nvme_keyring 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2901849 ']' 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2901849 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2901849 ']' 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2901849 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2901849 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2901849' 00:07:28.862 killing process with pid 2901849 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2901849 00:07:28.862 21:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2901849 00:07:29.123 21:32:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:29.123 21:32:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:29.123 21:32:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:29.123 21:32:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:29.123 21:32:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:29.123 21:32:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.123 21:32:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.123 21:32:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.033 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:31.033 00:07:31.033 real 0m46.698s 00:07:31.033 user 3m12.667s 00:07:31.033 sys 0m15.723s 00:07:31.033 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.033 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:31.033 ************************************ 00:07:31.033 END TEST nvmf_ns_hotplug_stress 00:07:31.033 ************************************ 00:07:31.033 21:32:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:31.033 21:32:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:31.033 21:32:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.033 21:32:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.033 ************************************ 00:07:31.033 START TEST nvmf_delete_subsystem 00:07:31.033 ************************************ 00:07:31.033 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:31.293 * Looking for test storage... 
00:07:31.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.293 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:31.294 21:32:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:36.574 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:36.574 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:36.574 Found net devices under 0000:86:00.0: cvl_0_0 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:36.574 Found net devices under 0000:86:00.1: cvl_0_1 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:36.574 21:32:44 
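gather_supported_nvmf_pci_devs, traced above, walks the PCI bus looking for supported NICs (here two Intel E810 functions, device ID 0x159b) and records the kernel netdevs exposed under each function, which is where the cvl_0_0 / cvl_0_1 names come from. A rough sketch of that discovery under the same sysfs layout; using lspci here is only illustrative, since the script builds its own PCI cache:

    # collect the netdev name(s) the kernel exposes for each 8086:159b function
    net_devs=()
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $netdir ]] && net_devs+=("${netdir##*/}")
        done
    done
    printf 'Found net device: %s\n' "${net_devs[@]}"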
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.574 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:36.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:07:36.575 00:07:36.575 --- 10.0.0.2 ping statistics --- 00:07:36.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.575 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:36.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:07:36.575 00:07:36.575 --- 10.0.0.1 ping statistics --- 00:07:36.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.575 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:36.575 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2912113 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2912113 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2912113 ']' 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.835 21:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.835 [2024-07-24 21:32:44.746440] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
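The nvmf_tcp_init sequence traced just before the pings builds a point-to-point TCP topology on a single host by moving one port of the NIC into a private network namespace, so the target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1, in the root namespace) talk over a real link. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator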
00:07:36.835 [2024-07-24 21:32:44.746486] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.835 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.835 [2024-07-24 21:32:44.804823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:36.835 [2024-07-24 21:32:44.885331] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.835 [2024-07-24 21:32:44.885366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.835 [2024-07-24 21:32:44.885373] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.835 [2024-07-24 21:32:44.885379] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.835 [2024-07-24 21:32:44.885385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.835 [2024-07-24 21:32:44.885419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.835 [2024-07-24 21:32:44.885422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.455 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.455 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:07:37.455 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:37.455 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:37.455 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 [2024-07-24 21:32:45.601898] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 [2024-07-24 21:32:45.622046] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 NULL1 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 Delay0 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2912362 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:37.714 21:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:37.714 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.714 [2024-07-24 21:32:45.702766] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
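The RPC sequence traced above is the whole setup for the delete test: create a TCP transport and a subsystem backed by a deliberately slow delay bdev, start perf I/O against it, then delete the subsystem underneath the initiator. Condensed, with the long workspace paths shortened to rpc.py and spdk_nvme_perf (same arguments as in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512
    # wrap NULL1 in a delay bdev so I/O is guaranteed to still be in flight
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The flood of "completed with error (sct=0, sc=8)" lines that follows is what the test is exercising: queued I/O fails once the subsystem disappears, and perf exits reporting "errors occurred".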
00:07:39.621 21:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.621 21:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.621 21:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.881 Read completed with error (sct=0, sc=8) 00:07:39.881 Read completed with error (sct=0, sc=8) 00:07:39.881 starting I/O failed: -6 00:07:39.881 Write completed with error (sct=0, sc=8) 00:07:39.881 Read completed with error (sct=0, sc=8) 00:07:39.881 Write completed with error (sct=0, sc=8) 00:07:39.881 Read completed with error (sct=0, sc=8) 00:07:39.881 starting I/O failed: -6 00:07:39.881 Write completed with error (sct=0, sc=8) 00:07:39.881 Write completed with error (sct=0, sc=8) 00:07:39.881 Read completed with error (sct=0, sc=8) 00:07:39.881 Read completed with error (sct=0, sc=8) 00:07:39.881 starting I/O failed: -6 00:07:39.881 Write completed with error (sct=0, sc=8) 00:07:39.881 Read completed with error (sct=0, sc=8) 00:07:39.881 Read completed with error (sct=0, sc=8) 00:07:39.881 Read completed with error (sct=0, sc=8) 00:07:39.881 starting I/O failed: -6 00:07:39.881 Read completed with error (sct=0, sc=8) 00:07:39.881 Write completed with error (sct=0, sc=8) 00:07:39.881 Read completed with error (sct=0, sc=8) 00:07:39.881 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 [2024-07-24 21:32:47.926318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f689800d000 is same with the state(5) to be set 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Write completed with 
error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 
Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Write completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting 
I/O failed: -6 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 Read completed with error (sct=0, sc=8) 00:07:39.882 starting I/O failed: -6 00:07:39.882 [2024-07-24 21:32:47.927654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1924710 is same with the state(5) to be set 00:07:40.822 [2024-07-24 21:32:48.884429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1925ac0 is same with the state(5) to be set 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 [2024-07-24 21:32:48.929868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19243e0 is same with the state(5) to be set 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 
00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 [2024-07-24 21:32:48.930371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1924a40 is same with the state(5) to be set 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Write completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.822 Read completed with error (sct=0, sc=8) 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 Write completed with error (sct=0, sc=8) 00:07:40.823 Write completed with error (sct=0, sc=8) 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 Write completed with error (sct=0, sc=8) 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 [2024-07-24 21:32:48.930560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1924000 is same with the state(5) to be set 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 Write completed with error (sct=0, sc=8) 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 Write completed with error (sct=0, sc=8) 00:07:40.823 
Read completed with error (sct=0, sc=8) 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 Read completed with error (sct=0, sc=8) 00:07:40.823 Write completed with error (sct=0, sc=8) 00:07:40.823 Write completed with error (sct=0, sc=8) 00:07:40.823 Write completed with error (sct=0, sc=8) 00:07:40.823 [2024-07-24 21:32:48.930643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f689800d330 is same with the state(5) to be set 00:07:40.823 Initializing NVMe Controllers 00:07:40.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:40.823 Controller IO queue size 128, less than required. 00:07:40.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:40.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:40.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:40.823 Initialization complete. Launching workers. 00:07:40.823 ======================================================== 00:07:40.823 Latency(us) 00:07:40.823 Device Information : IOPS MiB/s Average min max 00:07:40.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 179.67 0.09 999640.83 453.40 2001474.35 00:07:40.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.89 0.07 894553.38 230.72 1043728.47 00:07:40.823 ======================================================== 00:07:40.823 Total : 330.56 0.16 951672.99 230.72 2001474.35 00:07:40.823 00:07:40.823 [2024-07-24 21:32:48.931289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1925ac0 (9): Bad file descriptor 00:07:40.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:40.823 21:32:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.823 21:32:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:40.823 21:32:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2912362 00:07:40.823 21:32:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2912362 00:07:41.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2912362) - No such process 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2912362 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2912362 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:07:41.392 21:32:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2912362 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.392 [2024-07-24 21:32:49.459109] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2913028 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2913028 00:07:41.392 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.392 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.651 [2024-07-24 21:32:49.518968] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:07:41.909 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.910 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2913028 00:07:41.910 21:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.477 21:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.477 21:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2913028 00:07:42.477 21:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.046 21:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.046 21:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2913028 00:07:43.046 21:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.616 21:32:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.616 21:32:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2913028 00:07:43.616 21:32:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.185 21:32:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.185 21:32:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2913028 00:07:44.185 21:32:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.445 21:32:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.445 21:32:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2913028 00:07:44.445 21:32:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.704 Initializing NVMe Controllers 00:07:44.704 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:44.704 Controller IO queue size 128, less than required. 00:07:44.704 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:44.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:44.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:44.704 Initialization complete. Launching workers. 
00:07:44.704 ======================================================== 00:07:44.704 Latency(us) 00:07:44.704 Device Information : IOPS MiB/s Average min max 00:07:44.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004226.79 1000421.58 1043384.02 00:07:44.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005223.96 1000382.20 1012509.02 00:07:44.705 ======================================================== 00:07:44.705 Total : 256.00 0.12 1004725.37 1000382.20 1043384.02 00:07:44.705 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2913028 00:07:44.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2913028) - No such process 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2913028 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.965 rmmod nvme_tcp 00:07:44.965 rmmod nvme_fabrics 00:07:44.965 rmmod nvme_keyring 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2912113 ']' 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2912113 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2912113 ']' 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2912113 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:07:44.965 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2912113 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2912113' 00:07:45.226 killing process with pid 2912113 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2912113 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 2912113 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.226 21:32:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:47.766 00:07:47.766 real 0m16.230s 00:07:47.766 user 0m30.598s 00:07:47.766 sys 0m4.933s 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.766 ************************************ 00:07:47.766 END TEST nvmf_delete_subsystem 00:07:47.766 ************************************ 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.766 ************************************ 00:07:47.766 START TEST nvmf_host_management 00:07:47.766 ************************************ 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:47.766 * Looking for test storage... 
00:07:47.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.766 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:47.767 21:32:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:53.053 
21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:53.053 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:53.053 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:53.053 Found net devices under 0000:86:00.0: cvl_0_0 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:53.053 Found net devices under 0000:86:00.1: cvl_0_1 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:53.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:53.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:07:53.053 00:07:53.053 --- 10.0.0.2 ping statistics --- 00:07:53.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.053 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:07:53.053 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:07:53.053 00:07:53.053 --- 10.0.0.1 ping statistics --- 00:07:53.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.054 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2917100 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2917100 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2917100 ']' 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.054 21:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.054 [2024-07-24 21:33:00.852086] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:07:53.054 [2024-07-24 21:33:00.852140] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.054 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.054 [2024-07-24 21:33:00.909707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.054 [2024-07-24 21:33:00.989191] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.054 [2024-07-24 21:33:00.989231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.054 [2024-07-24 21:33:00.989238] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.054 [2024-07-24 21:33:00.989244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.054 [2024-07-24 21:33:00.989249] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.054 [2024-07-24 21:33:00.989383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.054 [2024-07-24 21:33:00.989475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.054 [2024-07-24 21:33:00.989582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.054 [2024-07-24 21:33:00.989583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:53.653 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.653 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:53.653 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.654 [2024-07-24 21:33:01.718370] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.654 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.654 Malloc0 00:07:53.913 [2024-07-24 21:33:01.778037] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2917385 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2917385 /var/tmp/bdevperf.sock 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2917385 ']' 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:53.913 { 00:07:53.913 "params": { 00:07:53.913 "name": "Nvme$subsystem", 00:07:53.913 "trtype": "$TEST_TRANSPORT", 00:07:53.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.913 "adrfam": "ipv4", 00:07:53.913 "trsvcid": "$NVMF_PORT", 00:07:53.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.913 "hdgst": ${hdgst:-false}, 00:07:53.913 "ddgst": ${ddgst:-false} 00:07:53.913 }, 00:07:53.913 "method": "bdev_nvme_attach_controller" 00:07:53.913 } 00:07:53.913 EOF 00:07:53.913 )") 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:53.913 21:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:53.913 "params": { 00:07:53.913 "name": "Nvme0", 00:07:53.913 "trtype": "tcp", 00:07:53.914 "traddr": "10.0.0.2", 00:07:53.914 "adrfam": "ipv4", 00:07:53.914 "trsvcid": "4420", 00:07:53.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:53.914 "hdgst": false, 00:07:53.914 "ddgst": false 00:07:53.914 }, 00:07:53.914 "method": "bdev_nvme_attach_controller" 00:07:53.914 }' 00:07:53.914 [2024-07-24 21:33:01.870656] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:07:53.914 [2024-07-24 21:33:01.870704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917385 ] 00:07:53.914 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.914 [2024-07-24 21:33:01.926370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.914 [2024-07-24 21:33:02.000404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.483 Running I/O for 10 seconds... 
00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.744 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.745 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:07:54.745 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:07:54.745 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:54.745 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:54.745 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:54.745 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:54.745 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.745 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.745 [2024-07-24 
21:33:02.769292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55580 is same with the state(5) to be set 00:07:54.745 [2024-07-24 21:33:02.769347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55580 is same with the state(5) to be set 00:07:54.745 [2024-07-24 21:33:02.769355] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55580 is same with the state(5) to be set 00:07:54.745 [2024-07-24 21:33:02.769361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55580 is same with the state(5) to be set 00:07:54.745 [2024-07-24 21:33:02.769368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55580 is same with the state(5) to be set 00:07:54.745 [2024-07-24 21:33:02.769374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe55580 is same with the state(5) to be set 00:07:54.745 [2024-07-24 21:33:02.773022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:54.745 [2024-07-24 21:33:02.773063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:54.745 [2024-07-24 21:33:02.773080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:54.745 [2024-07-24 21:33:02.773094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:54.745 [2024-07-24 21:33:02.773107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7f980 is same with the state(5) to be set 00:07:54.745 [2024-07-24 21:33:02.773790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.773984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.773991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:54.745 [2024-07-24 21:33:02.774000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.745 [2024-07-24 21:33:02.774092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.745 [2024-07-24 21:33:02.774220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.745 [2024-07-24 21:33:02.774228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:1 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:54.746 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 [2024-07-24 21:33:02.774750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.746 [2024-07-24 21:33:02.774756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.746 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.746 [2024-07-24 21:33:02.774815] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21b1660 was disconnected and freed. reset controller. 00:07:54.746 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.746 [2024-07-24 21:33:02.775731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:54.746 task offset: 57344 on job bdev=Nvme0n1 fails 00:07:54.746 00:07:54.746 Latency(us) 00:07:54.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.747 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:54.747 Job: Nvme0n1 ended in about 0.44 seconds with error 00:07:54.747 Verification LBA range: start 0x0 length 0x400 00:07:54.747 Nvme0n1 : 0.44 1017.59 63.60 145.37 0.00 53812.95 1303.60 58811.44 00:07:54.747 =================================================================================================================== 00:07:54.747 Total : 1017.59 63.60 145.37 0.00 53812.95 1303.60 58811.44 00:07:54.747 [2024-07-24 21:33:02.777360] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:54.747 [2024-07-24 21:33:02.777374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7f980 (9): Bad file descriptor 00:07:54.747 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.747 21:33:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:54.747 [2024-07-24 21:33:02.826901] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
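The burst of ABORTED - SQ DELETION completions and the controller reset above are the expected, deliberately provoked part of this test case: the target drops the host's queue pairs while bdevperf still has 64 WRITEs outstanding, the job ends in error after about 0.44 s, and the interleaved rpc_cmd at host_management.sh line 85 re-authorizes the host so that the reset ("Resetting controller successful") and the retry below can proceed. Pulled out of the trace for readability, that authorization step is a single RPC; the sketch below reuses only the rpc.py path and NQNs that appear in the log, and the variable name is illustrative rather than part of the test script:

    # Host-authorization step exercised above (sketch; the real test drives this
    # through rpc_cmd inside host_management.sh, not a standalone script).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0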
00:07:55.686 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2917385 00:07:55.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2917385) - No such process 00:07:55.686 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:55.686 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:55.686 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:55.686 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:55.686 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:55.686 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:55.686 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:55.686 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:55.686 { 00:07:55.686 "params": { 00:07:55.686 "name": "Nvme$subsystem", 00:07:55.686 "trtype": "$TEST_TRANSPORT", 00:07:55.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.686 "adrfam": "ipv4", 00:07:55.686 "trsvcid": "$NVMF_PORT", 00:07:55.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.686 "hdgst": ${hdgst:-false}, 00:07:55.686 "ddgst": ${ddgst:-false} 00:07:55.686 }, 00:07:55.686 "method": "bdev_nvme_attach_controller" 00:07:55.686 } 00:07:55.686 EOF 00:07:55.686 )") 00:07:55.686 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:55.686 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:55.946 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:55.946 21:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:55.946 "params": { 00:07:55.946 "name": "Nvme0", 00:07:55.946 "trtype": "tcp", 00:07:55.946 "traddr": "10.0.0.2", 00:07:55.946 "adrfam": "ipv4", 00:07:55.946 "trsvcid": "4420", 00:07:55.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:55.946 "hdgst": false, 00:07:55.946 "ddgst": false 00:07:55.946 }, 00:07:55.946 "method": "bdev_nvme_attach_controller" 00:07:55.946 }' 00:07:55.946 [2024-07-24 21:33:03.834373] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
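For readability, the controller entry that gen_nvmf_target_json resolves above (printed through jq and printf with timestamps interleaved) is reproduced below without the log prefixes. In the test it is spliced into a larger JSON document that bdevperf reads from /dev/fd/62; the heredoc form here is only a sketch of that fragment, not a complete --json config:

    # The bdev_nvme_attach_controller entry from the trace above, timestamps removed.
    # All values are the resolved ones printed in the log; nothing new is added.
    cat <<'EOF'
    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF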
00:07:55.946 [2024-07-24 21:33:03.834420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917699 ] 00:07:55.946 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.946 [2024-07-24 21:33:03.890074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.946 [2024-07-24 21:33:03.962180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.515 Running I/O for 1 seconds... 00:07:57.456 00:07:57.456 Latency(us) 00:07:57.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.456 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:57.456 Verification LBA range: start 0x0 length 0x400 00:07:57.456 Nvme0n1 : 1.02 1006.05 62.88 0.00 0.00 62831.89 13164.19 59267.34 00:07:57.456 =================================================================================================================== 00:07:57.456 Total : 1006.05 62.88 0.00 0.00 62831.89 13164.19 59267.34 00:07:57.456 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:57.456 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:57.456 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:57.456 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:57.456 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:57.456 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:57.456 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:57.456 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:57.456 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:57.456 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:57.456 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:57.456 rmmod nvme_tcp 00:07:57.456 rmmod nvme_fabrics 00:07:57.456 rmmod nvme_keyring 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2917100 ']' 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2917100 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2917100 ']' 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2917100 00:07:57.716 21:33:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2917100 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2917100' 00:07:57.716 killing process with pid 2917100 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2917100 00:07:57.716 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2917100 00:07:57.716 [2024-07-24 21:33:05.819751] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:57.977 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:57.977 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:57.977 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:57.977 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.977 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:57.977 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.977 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.977 21:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.889 21:33:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:59.889 21:33:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:59.889 00:07:59.889 real 0m12.469s 00:07:59.889 user 0m23.516s 00:07:59.889 sys 0m5.022s 00:07:59.889 21:33:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.889 21:33:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.889 ************************************ 00:07:59.889 END TEST nvmf_host_management 00:07:59.889 ************************************ 00:07:59.889 21:33:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:59.889 21:33:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:59.889 21:33:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.889 21:33:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.889 ************************************ 00:07:59.889 START TEST nvmf_lvol 00:07:59.889 ************************************ 00:07:59.889 
21:33:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:00.149 * Looking for test storage... 00:08:00.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.149 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.150 21:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:05.441 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:05.441 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:05.441 Found net devices under 0000:86:00.0: cvl_0_0 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:05.441 Found net devices under 0000:86:00.1: cvl_0_1 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.441 21:33:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:05.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:08:05.441 00:08:05.441 --- 10.0.0.2 ping statistics --- 00:08:05.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.441 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:05.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.424 ms 00:08:05.441 00:08:05.441 --- 10.0.0.1 ping statistics --- 00:08:05.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.441 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2921846 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2921846 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2921846 ']' 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.441 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.442 [2024-07-24 21:33:13.365478] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
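The nvmf_tgt instance that just started above runs inside a dedicated network namespace. nvmf_tcp_init prepared that topology a few lines earlier; condensed from the trace (every command below appears verbatim in the log and is run as root), the two e810 ports are split between the default namespace (initiator side) and the namespace (target side):

    # Recap of the traced nvmf_tcp_init setup: cvl_0_1 stays in the default namespace
    # as the initiator (10.0.0.1), cvl_0_0 moves into cvl_0_0_ns_spdk as the target
    # (10.0.0.2), and TCP port 4420 is opened for NVMe/TCP.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # After the two pings above confirm the path, the target is started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7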
00:08:05.442 [2024-07-24 21:33:13.365527] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.442 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.442 [2024-07-24 21:33:13.419755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.442 [2024-07-24 21:33:13.494075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.442 [2024-07-24 21:33:13.494114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.442 [2024-07-24 21:33:13.494121] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.442 [2024-07-24 21:33:13.494127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.442 [2024-07-24 21:33:13.494133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.442 [2024-07-24 21:33:13.494174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.442 [2024-07-24 21:33:13.494273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.442 [2024-07-24 21:33:13.494275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.701 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.701 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:05.701 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:05.701 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.701 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.701 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.701 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:05.701 [2024-07-24 21:33:13.784781] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.701 21:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:05.960 21:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:05.960 21:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:06.220 21:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:06.220 21:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:06.480 21:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:06.480 21:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=92658d6a-55be-483b-a744-260bd2b3e0a5 
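The lvstore UUID captured above completes the backing-store setup for this test; everything is built through RPCs against the running target. Condensed from the trace (the $rpc_py shorthand mirrors the script itself; capturing the UUID into a shell variable is illustrative):

    # Recap of the provisioning RPCs traced above: two 64 MB malloc bdevs with
    # 512-byte blocks, a raid0 across them, and an lvstore named "lvs" on the raid.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py bdev_malloc_create 64 512                     # returns Malloc0
    $rpc_py bdev_malloc_create 64 512                     # returns Malloc1
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)     # e.g. 92658d6a-55be-483b-a744-260bd2b3e0a5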
00:08:06.480 21:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 92658d6a-55be-483b-a744-260bd2b3e0a5 lvol 20 00:08:06.740 21:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=482c4815-6d73-4266-a751-9db2770f01dd 00:08:06.740 21:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:06.999 21:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 482c4815-6d73-4266-a751-9db2770f01dd 00:08:07.258 21:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:07.258 [2024-07-24 21:33:15.289271] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.258 21:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.517 21:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2922207 00:08:07.517 21:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:07.517 21:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:07.517 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.455 21:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 482c4815-6d73-4266-a751-9db2770f01dd MY_SNAPSHOT 00:08:08.715 21:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f33284df-1ff6-4139-806e-368c318e3dce 00:08:08.715 21:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 482c4815-6d73-4266-a751-9db2770f01dd 30 00:08:08.975 21:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f33284df-1ff6-4139-806e-368c318e3dce MY_CLONE 00:08:09.234 21:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6a04bef6-c5d6-47f9-951e-58e55ae7e79e 00:08:09.234 21:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6a04bef6-c5d6-47f9-951e-58e55ae7e79e 00:08:09.493 21:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2922207 00:08:19.517 Initializing NVMe Controllers 00:08:19.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:19.517 Controller IO queue size 128, less than required. 00:08:19.517 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
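While the spdk_nvme_perf job launched above (randwrite, queue depth 128, 10 s, core mask 0x18) writes to the exported namespace, the test snapshots the lvol, resizes it, clones the snapshot and inflates the clone, all online. Condensed from the trace, with $rpc_py as in the earlier sketch and the names and UUIDs copied from the log:

    # Recap of the export and online lvol operations traced above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvol=482c4815-6d73-4266-a751-9db2770f01dd            # from: bdev_lvol_create -u $lvs lvol 20
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # ... spdk_nvme_perf starts against 10.0.0.2:4420, then while it runs:
    snapshot=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # f33284df-1ff6-4139-806e-368c318e3dce
    $rpc_py bdev_lvol_resize "$lvol" 30
    clone=$($rpc_py bdev_lvol_clone "$snapshot" MY_CLONE)        # 6a04bef6-c5d6-47f9-951e-58e55ae7e79e
    $rpc_py bdev_lvol_inflate "$clone"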
00:08:19.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:19.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:19.517 Initialization complete. Launching workers. 00:08:19.517 ======================================================== 00:08:19.517 Latency(us) 00:08:19.517 Device Information : IOPS MiB/s Average min max 00:08:19.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11962.79 46.73 10705.34 1777.89 64597.72 00:08:19.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11762.09 45.95 10884.66 3421.38 63693.01 00:08:19.517 ======================================================== 00:08:19.517 Total : 23724.89 92.68 10794.24 1777.89 64597.72 00:08:19.517 00:08:19.517 21:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 482c4815-6d73-4266-a751-9db2770f01dd 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 92658d6a-55be-483b-a744-260bd2b3e0a5 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.517 rmmod nvme_tcp 00:08:19.517 rmmod nvme_fabrics 00:08:19.517 rmmod nvme_keyring 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2921846 ']' 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2921846 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2921846 ']' 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2921846 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2921846 00:08:19.517 21:33:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2921846' 00:08:19.517 killing process with pid 2921846 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2921846 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2921846 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.517 21:33:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.900 21:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:20.900 00:08:20.900 real 0m20.979s 00:08:20.900 user 1m2.696s 00:08:20.900 sys 0m6.494s 00:08:20.900 21:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.900 21:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:20.900 ************************************ 00:08:20.900 END TEST nvmf_lvol 00:08:20.900 ************************************ 00:08:20.900 21:33:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:20.900 21:33:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:20.900 21:33:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.900 21:33:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.900 ************************************ 00:08:20.900 START TEST nvmf_lvs_grow 00:08:20.900 ************************************ 00:08:20.900 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:21.162 * Looking for test storage... 
00:08:21.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.162 21:33:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:21.162 21:33:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.162 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.163 21:33:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:26.454 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:26.454 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:26.454 
21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:26.454 Found net devices under 0000:86:00.0: cvl_0_0 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.454 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:26.455 Found net devices under 0000:86:00.1: cvl_0_1 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.455 21:33:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.455 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:26.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:08:26.715 00:08:26.715 --- 10.0.0.2 ping statistics --- 00:08:26.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.715 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:26.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:08:26.715 00:08:26.715 --- 10.0.0.1 ping statistics --- 00:08:26.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.715 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2927492 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2927492 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2927492 ']' 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.715 21:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.715 [2024-07-24 21:33:34.815742] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:08:26.715 [2024-07-24 21:33:34.815786] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.975 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.975 [2024-07-24 21:33:34.874902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.975 [2024-07-24 21:33:34.949626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.975 [2024-07-24 21:33:34.949660] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.975 [2024-07-24 21:33:34.949667] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.975 [2024-07-24 21:33:34.949673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.975 [2024-07-24 21:33:34.949679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.975 [2024-07-24 21:33:34.949697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.544 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.544 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:27.544 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:27.544 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:27.544 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.544 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.544 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:27.804 [2024-07-24 21:33:35.793977] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.804 ************************************ 00:08:27.804 START TEST lvs_grow_clean 00:08:27.804 ************************************ 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.804 21:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:28.064 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:28.064 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:28.324 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=df85863d-9e53-4dd3-84ae-01fc7d8133f2 00:08:28.324 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df85863d-9e53-4dd3-84ae-01fc7d8133f2 00:08:28.324 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:28.324 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:28.324 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:28.324 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u df85863d-9e53-4dd3-84ae-01fc7d8133f2 lvol 150 00:08:28.583 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4e48835e-d980-4775-94de-3139ee128bfe 00:08:28.583 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:28.583 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:28.843 [2024-07-24 21:33:36.730752] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:28.843 [2024-07-24 21:33:36.730803] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:28.843 true 00:08:28.843 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df85863d-9e53-4dd3-84ae-01fc7d8133f2 00:08:28.843 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:28.843 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:28.843 21:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:29.104 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4e48835e-d980-4775-94de-3139ee128bfe 00:08:29.363 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:29.363 [2024-07-24 21:33:37.416800] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.363 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.624 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2928000 00:08:29.624 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:29.624 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2928000 /var/tmp/bdevperf.sock 00:08:29.624 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2928000 ']' 00:08:29.624 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:29.624 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.624 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:29.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:29.624 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.624 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:29.624 21:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:29.624 [2024-07-24 21:33:37.630425] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:08:29.624 [2024-07-24 21:33:37.630471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2928000 ] 00:08:29.624 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.624 [2024-07-24 21:33:37.685754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.883 [2024-07-24 21:33:37.760003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.454 21:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.454 21:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:30.454 21:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:30.713 Nvme0n1 00:08:30.713 21:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:30.973 [ 00:08:30.973 { 00:08:30.973 "name": "Nvme0n1", 00:08:30.973 "aliases": [ 00:08:30.973 "4e48835e-d980-4775-94de-3139ee128bfe" 00:08:30.973 ], 00:08:30.973 "product_name": "NVMe disk", 00:08:30.973 "block_size": 4096, 00:08:30.973 "num_blocks": 38912, 00:08:30.973 "uuid": "4e48835e-d980-4775-94de-3139ee128bfe", 00:08:30.973 "assigned_rate_limits": { 00:08:30.973 "rw_ios_per_sec": 0, 00:08:30.973 "rw_mbytes_per_sec": 0, 00:08:30.973 "r_mbytes_per_sec": 0, 00:08:30.973 "w_mbytes_per_sec": 0 00:08:30.973 }, 00:08:30.973 "claimed": false, 00:08:30.973 "zoned": false, 00:08:30.973 "supported_io_types": { 00:08:30.973 "read": true, 00:08:30.973 "write": true, 00:08:30.973 "unmap": true, 00:08:30.973 "flush": true, 00:08:30.973 "reset": true, 00:08:30.973 "nvme_admin": true, 00:08:30.973 "nvme_io": true, 00:08:30.974 "nvme_io_md": false, 00:08:30.974 "write_zeroes": true, 00:08:30.974 "zcopy": false, 00:08:30.974 "get_zone_info": false, 00:08:30.974 "zone_management": false, 00:08:30.974 "zone_append": false, 00:08:30.974 "compare": true, 00:08:30.974 "compare_and_write": true, 00:08:30.974 "abort": true, 00:08:30.974 "seek_hole": false, 00:08:30.974 "seek_data": false, 00:08:30.974 "copy": true, 00:08:30.974 "nvme_iov_md": false 00:08:30.974 }, 00:08:30.974 "memory_domains": [ 00:08:30.974 { 00:08:30.974 "dma_device_id": "system", 00:08:30.974 "dma_device_type": 1 00:08:30.974 } 00:08:30.974 ], 00:08:30.974 "driver_specific": { 00:08:30.974 "nvme": [ 00:08:30.974 { 00:08:30.974 "trid": { 00:08:30.974 "trtype": "TCP", 00:08:30.974 "adrfam": "IPv4", 00:08:30.974 "traddr": "10.0.0.2", 00:08:30.974 "trsvcid": "4420", 00:08:30.974 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:30.974 }, 00:08:30.974 "ctrlr_data": { 00:08:30.974 "cntlid": 1, 00:08:30.974 "vendor_id": "0x8086", 00:08:30.974 "model_number": "SPDK bdev Controller", 00:08:30.974 "serial_number": "SPDK0", 00:08:30.974 "firmware_revision": "24.09", 00:08:30.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:30.974 "oacs": { 00:08:30.974 "security": 0, 00:08:30.974 "format": 0, 00:08:30.974 "firmware": 0, 00:08:30.974 "ns_manage": 0 00:08:30.974 }, 00:08:30.974 
"multi_ctrlr": true, 00:08:30.974 "ana_reporting": false 00:08:30.974 }, 00:08:30.974 "vs": { 00:08:30.974 "nvme_version": "1.3" 00:08:30.974 }, 00:08:30.974 "ns_data": { 00:08:30.974 "id": 1, 00:08:30.974 "can_share": true 00:08:30.974 } 00:08:30.974 } 00:08:30.974 ], 00:08:30.974 "mp_policy": "active_passive" 00:08:30.974 } 00:08:30.974 } 00:08:30.974 ] 00:08:30.974 21:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2928232 00:08:30.974 21:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:30.974 21:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:30.974 Running I/O for 10 seconds... 00:08:31.912 Latency(us) 00:08:31.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.912 Nvme0n1 : 1.00 22215.00 86.78 0.00 0.00 0.00 0.00 0.00 00:08:31.912 =================================================================================================================== 00:08:31.912 Total : 22215.00 86.78 0.00 0.00 0.00 0.00 0.00 00:08:31.912 00:08:32.851 21:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u df85863d-9e53-4dd3-84ae-01fc7d8133f2 00:08:32.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.851 Nvme0n1 : 2.00 22444.50 87.67 0.00 0.00 0.00 0.00 0.00 00:08:32.851 =================================================================================================================== 00:08:32.851 Total : 22444.50 87.67 0.00 0.00 0.00 0.00 0.00 00:08:32.851 00:08:33.111 true 00:08:33.111 21:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df85863d-9e53-4dd3-84ae-01fc7d8133f2 00:08:33.111 21:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:33.111 21:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:33.111 21:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:33.111 21:33:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2928232 00:08:34.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.048 Nvme0n1 : 3.00 22472.67 87.78 0.00 0.00 0.00 0.00 0.00 00:08:34.048 =================================================================================================================== 00:08:34.048 Total : 22472.67 87.78 0.00 0.00 0.00 0.00 0.00 00:08:34.048 00:08:34.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.986 Nvme0n1 : 4.00 22699.25 88.67 0.00 0.00 0.00 0.00 0.00 00:08:34.986 =================================================================================================================== 00:08:34.986 Total : 22699.25 88.67 0.00 0.00 0.00 0.00 0.00 00:08:34.986 00:08:35.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:35.936 Nvme0n1 : 5.00 22743.20 88.84 0.00 0.00 0.00 0.00 0.00 00:08:35.936 =================================================================================================================== 00:08:35.936 Total : 22743.20 88.84 0.00 0.00 0.00 0.00 0.00 00:08:35.936 00:08:36.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.873 Nvme0n1 : 6.00 22722.33 88.76 0.00 0.00 0.00 0.00 0.00 00:08:36.873 =================================================================================================================== 00:08:36.873 Total : 22722.33 88.76 0.00 0.00 0.00 0.00 0.00 00:08:36.873 00:08:38.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.251 Nvme0n1 : 7.00 22671.00 88.56 0.00 0.00 0.00 0.00 0.00 00:08:38.251 =================================================================================================================== 00:08:38.251 Total : 22671.00 88.56 0.00 0.00 0.00 0.00 0.00 00:08:38.251 00:08:39.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.189 Nvme0n1 : 8.00 22732.00 88.80 0.00 0.00 0.00 0.00 0.00 00:08:39.189 =================================================================================================================== 00:08:39.189 Total : 22732.00 88.80 0.00 0.00 0.00 0.00 0.00 00:08:39.189 00:08:40.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.127 Nvme0n1 : 9.00 22702.78 88.68 0.00 0.00 0.00 0.00 0.00 00:08:40.127 =================================================================================================================== 00:08:40.127 Total : 22702.78 88.68 0.00 0.00 0.00 0.00 0.00 00:08:40.127 00:08:41.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.065 Nvme0n1 : 10.00 22668.60 88.55 0.00 0.00 0.00 0.00 0.00 00:08:41.065 =================================================================================================================== 00:08:41.065 Total : 22668.60 88.55 0.00 0.00 0.00 0.00 0.00 00:08:41.065 00:08:41.065 00:08:41.065 Latency(us) 00:08:41.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.065 Nvme0n1 : 10.01 22668.00 88.55 0.00 0.00 5642.61 2649.93 31001.38 00:08:41.065 =================================================================================================================== 00:08:41.065 Total : 22668.00 88.55 0.00 0.00 5642.61 2649.93 31001.38 00:08:41.065 0 00:08:41.065 21:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2928000 00:08:41.065 21:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2928000 ']' 00:08:41.065 21:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2928000 00:08:41.065 21:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:41.065 21:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:41.065 21:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2928000 00:08:41.065 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:41.065 
21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:41.065 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2928000' 00:08:41.065 killing process with pid 2928000 00:08:41.065 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2928000 00:08:41.065 Received shutdown signal, test time was about 10.000000 seconds 00:08:41.065 00:08:41.065 Latency(us) 00:08:41.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.065 =================================================================================================================== 00:08:41.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:41.065 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2928000 00:08:41.324 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.324 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:41.584 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df85863d-9e53-4dd3-84ae-01fc7d8133f2 00:08:41.584 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.844 [2024-07-24 21:33:49.883619] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df85863d-9e53-4dd3-84ae-01fc7d8133f2 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df85863d-9e53-4dd3-84ae-01fc7d8133f2 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.844 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:41.845 21:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df85863d-9e53-4dd3-84ae-01fc7d8133f2 00:08:42.104 request: 00:08:42.104 { 00:08:42.104 "uuid": "df85863d-9e53-4dd3-84ae-01fc7d8133f2", 00:08:42.104 "method": "bdev_lvol_get_lvstores", 00:08:42.104 "req_id": 1 00:08:42.104 } 00:08:42.104 Got JSON-RPC error response 00:08:42.104 response: 00:08:42.104 { 00:08:42.104 "code": -19, 00:08:42.104 "message": "No such device" 00:08:42.104 } 00:08:42.104 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:42.104 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:42.104 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:42.104 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:42.104 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.363 aio_bdev 00:08:42.363 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4e48835e-d980-4775-94de-3139ee128bfe 00:08:42.363 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=4e48835e-d980-4775-94de-3139ee128bfe 00:08:42.363 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:42.364 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:42.364 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:42.364 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:42.364 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:42.364 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 4e48835e-d980-4775-94de-3139ee128bfe -t 2000 00:08:42.623 [ 00:08:42.623 { 00:08:42.623 "name": "4e48835e-d980-4775-94de-3139ee128bfe", 00:08:42.623 "aliases": [ 00:08:42.623 "lvs/lvol" 00:08:42.623 ], 00:08:42.623 "product_name": "Logical Volume", 00:08:42.623 "block_size": 4096, 00:08:42.623 "num_blocks": 38912, 00:08:42.623 "uuid": "4e48835e-d980-4775-94de-3139ee128bfe", 00:08:42.623 "assigned_rate_limits": { 00:08:42.623 "rw_ios_per_sec": 0, 00:08:42.623 "rw_mbytes_per_sec": 0, 00:08:42.623 "r_mbytes_per_sec": 0, 00:08:42.623 "w_mbytes_per_sec": 0 00:08:42.623 }, 00:08:42.623 "claimed": false, 00:08:42.623 "zoned": false, 00:08:42.623 "supported_io_types": { 00:08:42.623 "read": true, 00:08:42.623 "write": true, 00:08:42.623 "unmap": true, 00:08:42.623 "flush": false, 00:08:42.623 "reset": true, 00:08:42.623 "nvme_admin": false, 00:08:42.623 "nvme_io": false, 00:08:42.623 "nvme_io_md": false, 00:08:42.623 "write_zeroes": true, 00:08:42.623 "zcopy": false, 00:08:42.623 "get_zone_info": false, 00:08:42.623 "zone_management": false, 00:08:42.623 "zone_append": false, 00:08:42.623 "compare": false, 00:08:42.623 "compare_and_write": false, 00:08:42.623 "abort": false, 00:08:42.623 "seek_hole": true, 00:08:42.623 "seek_data": true, 00:08:42.623 "copy": false, 00:08:42.623 "nvme_iov_md": false 00:08:42.623 }, 00:08:42.623 "driver_specific": { 00:08:42.623 "lvol": { 00:08:42.623 "lvol_store_uuid": "df85863d-9e53-4dd3-84ae-01fc7d8133f2", 00:08:42.623 "base_bdev": "aio_bdev", 00:08:42.623 "thin_provision": false, 00:08:42.623 "num_allocated_clusters": 38, 00:08:42.623 "snapshot": false, 00:08:42.623 "clone": false, 00:08:42.623 "esnap_clone": false 00:08:42.623 } 00:08:42.623 } 00:08:42.623 } 00:08:42.623 ] 00:08:42.623 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:42.623 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df85863d-9e53-4dd3-84ae-01fc7d8133f2 00:08:42.623 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:42.883 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:42.883 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df85863d-9e53-4dd3-84ae-01fc7d8133f2 00:08:42.883 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:42.883 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:42.883 21:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4e48835e-d980-4775-94de-3139ee128bfe 00:08:43.143 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df85863d-9e53-4dd3-84ae-01fc7d8133f2 00:08:43.403 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:43.403 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.663 00:08:43.663 real 0m15.681s 00:08:43.663 user 0m15.282s 00:08:43.663 sys 0m1.508s 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:43.663 ************************************ 00:08:43.663 END TEST lvs_grow_clean 00:08:43.663 ************************************ 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.663 ************************************ 00:08:43.663 START TEST lvs_grow_dirty 00:08:43.663 ************************************ 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.663 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:43.923 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:43.923 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:43.923 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=590360f9-aa9c-4436-91e7-6a9a13d99de8 00:08:43.923 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 00:08:43.923 21:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:44.183 21:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:44.183 21:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:44.183 21:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 lvol 150 00:08:44.445 21:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0ad8126c-589a-4940-84b3-4e5f99c2b58c 00:08:44.445 21:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.445 21:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:44.445 [2024-07-24 21:33:52.479332] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:44.445 [2024-07-24 21:33:52.479385] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:44.445 true 00:08:44.445 21:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 00:08:44.445 21:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:44.705 21:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:44.705 21:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:44.964 21:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0ad8126c-589a-4940-84b3-4e5f99c2b58c 00:08:44.965 21:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:45.225 [2024-07-24 21:33:53.165367] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.225 21:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
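The trace above has just assembled the lvs_grow_dirty fixture: a 200M backing file exposed as aio_bdev, a lvstore with 4 MiB clusters (49 data clusters), a 150M lvol, and the file then grown to 400M and rescanned before the NVMe-oF subsystem and TCP listener are created. Below is a minimal standalone sketch of that grow flow, assuming a running SPDK target with scripts/rpc.py on PATH; the backing-file path and bdev/lvstore names are illustrative rather than the test's fixtures, and the bdevperf I/O phase the real test runs in parallel is omitted.

#!/usr/bin/env bash
# Sketch of the lvstore grow flow traced above; paths and names are placeholders.
set -euo pipefail

rpc=rpc.py                      # assumes SPDK's scripts/rpc.py is on PATH
aio_file=/tmp/lvs_grow_aio      # illustrative scratch file, not the test fixture

# 200 MiB backing file -> AIO bdev -> lvstore with 4 MiB clusters (49 data clusters).
rm -f "$aio_file"
truncate -s 200M "$aio_file"
"$rpc" bdev_aio_create "$aio_file" aio_bdev 4096
lvs_uuid=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)

# Carve out a 150 MiB thick-provisioned lvol.
lvol_uuid=$("$rpc" bdev_lvol_create -u "$lvs_uuid" lvol 150)

# Grow the backing file, rescan the AIO bdev, then grow the lvstore into the new space.
truncate -s 400M "$aio_file"
"$rpc" bdev_aio_rescan aio_bdev
"$rpc" bdev_lvol_grow_lvstore -u "$lvs_uuid"

# Total data clusters should now read 99 instead of 49.
"$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'

In the actual test the bdev_lvol_grow_lvstore call happens a couple of seconds into the 10-second bdevperf randwrite run, so the growth is exercised under live I/O; the sketch compresses that into a purely sequential flow.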
00:08:45.225 21:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2930815 00:08:45.225 21:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:45.225 21:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2930815 /var/tmp/bdevperf.sock 00:08:45.225 21:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:45.225 21:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2930815 ']' 00:08:45.225 21:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.225 21:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.225 21:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:45.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:45.225 21:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.225 21:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.484 [2024-07-24 21:33:53.378917] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:08:45.484 [2024-07-24 21:33:53.378965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930815 ] 00:08:45.484 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.484 [2024-07-24 21:33:53.432200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.484 [2024-07-24 21:33:53.504499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.421 21:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.421 21:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:46.421 21:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:46.421 Nvme0n1 00:08:46.421 21:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:46.681 [ 00:08:46.681 { 00:08:46.681 "name": "Nvme0n1", 00:08:46.681 "aliases": [ 00:08:46.681 "0ad8126c-589a-4940-84b3-4e5f99c2b58c" 00:08:46.681 ], 00:08:46.681 "product_name": "NVMe disk", 00:08:46.681 "block_size": 4096, 00:08:46.681 "num_blocks": 38912, 00:08:46.681 "uuid": "0ad8126c-589a-4940-84b3-4e5f99c2b58c", 00:08:46.681 "assigned_rate_limits": { 00:08:46.681 "rw_ios_per_sec": 0, 00:08:46.681 "rw_mbytes_per_sec": 0, 00:08:46.681 "r_mbytes_per_sec": 0, 00:08:46.681 "w_mbytes_per_sec": 0 00:08:46.681 }, 00:08:46.681 "claimed": false, 00:08:46.681 "zoned": false, 00:08:46.681 "supported_io_types": { 00:08:46.681 "read": true, 00:08:46.681 "write": true, 00:08:46.681 "unmap": true, 00:08:46.681 "flush": true, 00:08:46.681 "reset": true, 00:08:46.681 "nvme_admin": true, 00:08:46.681 "nvme_io": true, 00:08:46.681 "nvme_io_md": false, 00:08:46.681 "write_zeroes": true, 00:08:46.681 "zcopy": false, 00:08:46.681 "get_zone_info": false, 00:08:46.681 "zone_management": false, 00:08:46.681 "zone_append": false, 00:08:46.681 "compare": true, 00:08:46.681 "compare_and_write": true, 00:08:46.681 "abort": true, 00:08:46.681 "seek_hole": false, 00:08:46.681 "seek_data": false, 00:08:46.681 "copy": true, 00:08:46.681 "nvme_iov_md": false 00:08:46.681 }, 00:08:46.681 "memory_domains": [ 00:08:46.681 { 00:08:46.681 "dma_device_id": "system", 00:08:46.681 "dma_device_type": 1 00:08:46.681 } 00:08:46.681 ], 00:08:46.681 "driver_specific": { 00:08:46.681 "nvme": [ 00:08:46.681 { 00:08:46.681 "trid": { 00:08:46.681 "trtype": "TCP", 00:08:46.681 "adrfam": "IPv4", 00:08:46.681 "traddr": "10.0.0.2", 00:08:46.681 "trsvcid": "4420", 00:08:46.681 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:46.681 }, 00:08:46.681 "ctrlr_data": { 00:08:46.681 "cntlid": 1, 00:08:46.681 "vendor_id": "0x8086", 00:08:46.681 "model_number": "SPDK bdev Controller", 00:08:46.681 "serial_number": "SPDK0", 00:08:46.681 "firmware_revision": "24.09", 00:08:46.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:46.681 "oacs": { 00:08:46.681 "security": 0, 00:08:46.681 "format": 0, 00:08:46.681 "firmware": 0, 00:08:46.681 "ns_manage": 0 00:08:46.681 }, 00:08:46.681 
"multi_ctrlr": true, 00:08:46.681 "ana_reporting": false 00:08:46.681 }, 00:08:46.681 "vs": { 00:08:46.681 "nvme_version": "1.3" 00:08:46.681 }, 00:08:46.681 "ns_data": { 00:08:46.681 "id": 1, 00:08:46.681 "can_share": true 00:08:46.681 } 00:08:46.681 } 00:08:46.681 ], 00:08:46.681 "mp_policy": "active_passive" 00:08:46.681 } 00:08:46.681 } 00:08:46.681 ] 00:08:46.681 21:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2931047 00:08:46.681 21:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:46.681 21:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:46.681 Running I/O for 10 seconds... 00:08:47.618 Latency(us) 00:08:47.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.618 Nvme0n1 : 1.00 21782.00 85.09 0.00 0.00 0.00 0.00 0.00 00:08:47.618 =================================================================================================================== 00:08:47.618 Total : 21782.00 85.09 0.00 0.00 0.00 0.00 0.00 00:08:47.618 00:08:48.589 21:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 00:08:48.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.848 Nvme0n1 : 2.00 22261.00 86.96 0.00 0.00 0.00 0.00 0.00 00:08:48.848 =================================================================================================================== 00:08:48.848 Total : 22261.00 86.96 0.00 0.00 0.00 0.00 0.00 00:08:48.848 00:08:48.848 true 00:08:48.848 21:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 00:08:48.848 21:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:49.107 21:33:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:49.107 21:33:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:49.107 21:33:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2931047 00:08:49.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.675 Nvme0n1 : 3.00 22460.33 87.74 0.00 0.00 0.00 0.00 0.00 00:08:49.675 =================================================================================================================== 00:08:49.675 Total : 22460.33 87.74 0.00 0.00 0.00 0.00 0.00 00:08:49.675 00:08:50.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.614 Nvme0n1 : 4.00 22486.50 87.84 0.00 0.00 0.00 0.00 0.00 00:08:50.614 =================================================================================================================== 00:08:50.614 Total : 22486.50 87.84 0.00 0.00 0.00 0.00 0.00 00:08:50.614 00:08:51.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:51.994 Nvme0n1 : 5.00 22473.00 87.79 0.00 0.00 0.00 0.00 0.00 00:08:51.994 =================================================================================================================== 00:08:51.994 Total : 22473.00 87.79 0.00 0.00 0.00 0.00 0.00 00:08:51.994 00:08:52.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.933 Nvme0n1 : 6.00 22437.50 87.65 0.00 0.00 0.00 0.00 0.00 00:08:52.933 =================================================================================================================== 00:08:52.933 Total : 22437.50 87.65 0.00 0.00 0.00 0.00 0.00 00:08:52.933 00:08:53.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.872 Nvme0n1 : 7.00 22429.14 87.61 0.00 0.00 0.00 0.00 0.00 00:08:53.872 =================================================================================================================== 00:08:53.872 Total : 22429.14 87.61 0.00 0.00 0.00 0.00 0.00 00:08:53.872 00:08:54.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.807 Nvme0n1 : 8.00 22421.00 87.58 0.00 0.00 0.00 0.00 0.00 00:08:54.807 =================================================================================================================== 00:08:54.807 Total : 22421.00 87.58 0.00 0.00 0.00 0.00 0.00 00:08:54.807 00:08:55.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.742 Nvme0n1 : 9.00 22430.33 87.62 0.00 0.00 0.00 0.00 0.00 00:08:55.742 =================================================================================================================== 00:08:55.742 Total : 22430.33 87.62 0.00 0.00 0.00 0.00 0.00 00:08:55.742 00:08:56.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.679 Nvme0n1 : 10.00 22434.70 87.64 0.00 0.00 0.00 0.00 0.00 00:08:56.679 =================================================================================================================== 00:08:56.679 Total : 22434.70 87.64 0.00 0.00 0.00 0.00 0.00 00:08:56.679 00:08:56.679 00:08:56.679 Latency(us) 00:08:56.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.679 Nvme0n1 : 10.01 22434.21 87.63 0.00 0.00 5701.49 3063.10 26784.28 00:08:56.679 =================================================================================================================== 00:08:56.679 Total : 22434.21 87.63 0.00 0.00 5701.49 3063.10 26784.28 00:08:56.679 0 00:08:56.679 21:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2930815 00:08:56.679 21:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2930815 ']' 00:08:56.679 21:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2930815 00:08:56.679 21:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:56.679 21:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:56.679 21:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2930815 00:08:56.939 21:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:56.939 
21:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:56.939 21:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2930815' 00:08:56.939 killing process with pid 2930815 00:08:56.939 21:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2930815 00:08:56.939 Received shutdown signal, test time was about 10.000000 seconds 00:08:56.939 00:08:56.939 Latency(us) 00:08:56.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.939 =================================================================================================================== 00:08:56.939 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:56.939 21:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2930815 00:08:56.939 21:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:57.198 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:57.458 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 00:08:57.458 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:57.458 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:57.458 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:57.458 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2927492 00:08:57.458 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2927492 00:08:57.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2927492 Killed "${NVMF_APP[@]}" "$@" 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2932863 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2932863 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2932863 ']' 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.719 21:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.719 [2024-07-24 21:34:05.638082] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:08:57.719 [2024-07-24 21:34:05.638130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.719 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.719 [2024-07-24 21:34:05.696291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.719 [2024-07-24 21:34:05.774533] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.719 [2024-07-24 21:34:05.774567] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.719 [2024-07-24 21:34:05.774574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.719 [2024-07-24 21:34:05.774580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.719 [2024-07-24 21:34:05.774585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
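At this point the dirty branch has killed the original nvmf_tgt (pid 2927492) with SIGKILL, so the lvstore was never closed cleanly, and a replacement target is starting inside the cvl_0_0_ns_spdk namespace. A rough sketch of that crash-and-recover step follows; it assumes a plain single-process target rather than the test's namespaced setup, reuses the backing file from the earlier sketch, and assumes $nvmf_pid holds the pid of the target being killed.

#!/usr/bin/env bash
# Sketch of the "dirty" recovery path: kill the target without closing the lvstore,
# restart it, and re-attach the backing file so blobstore recovery runs.
# Assumes nvmf_tgt and rpc.py are on PATH; no network namespace is used here.
set -euo pipefail

rpc=rpc.py
aio_file=/tmp/lvs_grow_aio                         # backing file from the setup sketch
nvmf_pid=${nvmf_pid:?pid of the target to crash}   # assumed to be provided by the caller

# Simulate a crash: SIGKILL leaves the blobstore marked as never cleanly unloaded.
kill -9 "$nvmf_pid"

# Bring up a fresh target and give its RPC socket a moment to appear.
nvmf_tgt -m 0x1 &
nvmf_pid=$!
sleep 2

# Re-creating the AIO bdev re-opens the lvstore; the target then logs
# "Performing recovery on blobstore" while it replays the metadata.
"$rpc" bdev_aio_create "$aio_file" aio_bdev 4096
"$rpc" bdev_wait_for_examine

# The grown geometry should survive recovery: 99 data clusters, 61 of them free.
"$rpc" bdev_lvol_get_lvstores | jq -r '.[0] | "\(.total_data_clusters) \(.free_clusters)"'

The recovery notices that follow in the trace ("Performing recovery on blobstore", "Recover: blob 0x0/0x1") are exactly this path being taken by the restarted target.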
00:08:57.719 [2024-07-24 21:34:05.774601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.661 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.661 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.662 [2024-07-24 21:34:06.631101] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:58.662 [2024-07-24 21:34:06.631177] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:58.662 [2024-07-24 21:34:06.631201] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0ad8126c-589a-4940-84b3-4e5f99c2b58c 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0ad8126c-589a-4940-84b3-4e5f99c2b58c 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:58.662 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:58.922 21:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0ad8126c-589a-4940-84b3-4e5f99c2b58c -t 2000 00:08:58.922 [ 00:08:58.922 { 00:08:58.922 "name": "0ad8126c-589a-4940-84b3-4e5f99c2b58c", 00:08:58.922 "aliases": [ 00:08:58.922 "lvs/lvol" 00:08:58.922 ], 00:08:58.922 "product_name": "Logical Volume", 00:08:58.922 "block_size": 4096, 00:08:58.922 "num_blocks": 38912, 00:08:58.922 "uuid": "0ad8126c-589a-4940-84b3-4e5f99c2b58c", 00:08:58.922 "assigned_rate_limits": { 00:08:58.922 "rw_ios_per_sec": 0, 00:08:58.922 "rw_mbytes_per_sec": 0, 00:08:58.922 "r_mbytes_per_sec": 0, 00:08:58.922 "w_mbytes_per_sec": 0 00:08:58.922 }, 00:08:58.922 "claimed": false, 00:08:58.922 "zoned": false, 
00:08:58.922 "supported_io_types": { 00:08:58.922 "read": true, 00:08:58.922 "write": true, 00:08:58.922 "unmap": true, 00:08:58.922 "flush": false, 00:08:58.922 "reset": true, 00:08:58.922 "nvme_admin": false, 00:08:58.922 "nvme_io": false, 00:08:58.922 "nvme_io_md": false, 00:08:58.922 "write_zeroes": true, 00:08:58.922 "zcopy": false, 00:08:58.922 "get_zone_info": false, 00:08:58.922 "zone_management": false, 00:08:58.922 "zone_append": false, 00:08:58.922 "compare": false, 00:08:58.922 "compare_and_write": false, 00:08:58.922 "abort": false, 00:08:58.922 "seek_hole": true, 00:08:58.922 "seek_data": true, 00:08:58.922 "copy": false, 00:08:58.922 "nvme_iov_md": false 00:08:58.922 }, 00:08:58.922 "driver_specific": { 00:08:58.922 "lvol": { 00:08:58.922 "lvol_store_uuid": "590360f9-aa9c-4436-91e7-6a9a13d99de8", 00:08:58.922 "base_bdev": "aio_bdev", 00:08:58.922 "thin_provision": false, 00:08:58.922 "num_allocated_clusters": 38, 00:08:58.922 "snapshot": false, 00:08:58.922 "clone": false, 00:08:58.922 "esnap_clone": false 00:08:58.922 } 00:08:58.922 } 00:08:58.922 } 00:08:58.922 ] 00:08:58.922 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:58.922 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 00:08:58.922 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:59.182 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:59.183 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 00:08:59.183 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:59.445 [2024-07-24 21:34:07.491781] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:59.445 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 00:08:59.711 request: 00:08:59.711 { 00:08:59.711 "uuid": "590360f9-aa9c-4436-91e7-6a9a13d99de8", 00:08:59.711 "method": "bdev_lvol_get_lvstores", 00:08:59.711 "req_id": 1 00:08:59.711 } 00:08:59.711 Got JSON-RPC error response 00:08:59.711 response: 00:08:59.711 { 00:08:59.711 "code": -19, 00:08:59.711 "message": "No such device" 00:08:59.711 } 00:08:59.711 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:59.711 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:59.711 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:59.711 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:59.711 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.971 aio_bdev 00:08:59.971 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0ad8126c-589a-4940-84b3-4e5f99c2b58c 00:08:59.971 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0ad8126c-589a-4940-84b3-4e5f99c2b58c 00:08:59.971 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:59.971 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:59.971 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:59.971 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:59.971 21:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:59.971 21:34:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0ad8126c-589a-4940-84b3-4e5f99c2b58c -t 2000 00:09:00.231 [ 00:09:00.231 { 00:09:00.231 "name": "0ad8126c-589a-4940-84b3-4e5f99c2b58c", 00:09:00.231 "aliases": [ 00:09:00.231 "lvs/lvol" 00:09:00.231 ], 00:09:00.231 "product_name": "Logical Volume", 00:09:00.231 "block_size": 4096, 00:09:00.231 "num_blocks": 38912, 00:09:00.231 "uuid": "0ad8126c-589a-4940-84b3-4e5f99c2b58c", 00:09:00.231 "assigned_rate_limits": { 00:09:00.231 "rw_ios_per_sec": 0, 00:09:00.231 "rw_mbytes_per_sec": 0, 00:09:00.231 "r_mbytes_per_sec": 0, 00:09:00.231 "w_mbytes_per_sec": 0 00:09:00.231 }, 00:09:00.231 "claimed": false, 00:09:00.231 "zoned": false, 00:09:00.231 "supported_io_types": { 00:09:00.231 "read": true, 00:09:00.231 "write": true, 00:09:00.231 "unmap": true, 00:09:00.231 "flush": false, 00:09:00.231 "reset": true, 00:09:00.231 "nvme_admin": false, 00:09:00.231 "nvme_io": false, 00:09:00.231 "nvme_io_md": false, 00:09:00.231 "write_zeroes": true, 00:09:00.231 "zcopy": false, 00:09:00.231 "get_zone_info": false, 00:09:00.231 "zone_management": false, 00:09:00.231 "zone_append": false, 00:09:00.231 "compare": false, 00:09:00.231 "compare_and_write": false, 00:09:00.231 "abort": false, 00:09:00.231 "seek_hole": true, 00:09:00.231 "seek_data": true, 00:09:00.231 "copy": false, 00:09:00.231 "nvme_iov_md": false 00:09:00.231 }, 00:09:00.231 "driver_specific": { 00:09:00.231 "lvol": { 00:09:00.231 "lvol_store_uuid": "590360f9-aa9c-4436-91e7-6a9a13d99de8", 00:09:00.231 "base_bdev": "aio_bdev", 00:09:00.231 "thin_provision": false, 00:09:00.231 "num_allocated_clusters": 38, 00:09:00.231 "snapshot": false, 00:09:00.231 "clone": false, 00:09:00.231 "esnap_clone": false 00:09:00.231 } 00:09:00.231 } 00:09:00.231 } 00:09:00.231 ] 00:09:00.231 21:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:00.231 21:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 00:09:00.231 21:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:00.491 21:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:00.491 21:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 00:09:00.491 21:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:00.491 21:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:00.491 21:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0ad8126c-589a-4940-84b3-4e5f99c2b58c 00:09:00.751 21:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 590360f9-aa9c-4436-91e7-6a9a13d99de8 
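The free_clusters == 61 and data_clusters == 99 assertions that recovery has to satisfy follow directly from the fixture's geometry: the 400 MiB backing file gives 100 clusters of 4 MiB, of which the trace reports 99 usable as data (one consumed by lvstore metadata, consistent with the 49-of-50 figure seen at 200 MiB), and the 150 MiB thick-provisioned lvol pins ceil(150/4) = 38 of them, matching the num_allocated_clusters in the bdev dump above. A tiny sketch of that bookkeeping:

#!/usr/bin/env bash
# Cluster bookkeeping behind the "== 61" / "== 99" checks above (4 MiB clusters).
cluster_mb=4
lvol_mb=150
total_data_clusters=99                                        # reported after growing to 400 MiB
allocated=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))      # ceil(150/4) = 38
echo "free clusters: $(( total_data_clusters - allocated ))"  # 99 - 38 = 61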
00:09:01.011 21:34:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:01.011 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:01.011 00:09:01.011 real 0m17.501s 00:09:01.011 user 0m44.662s 00:09:01.011 sys 0m3.932s 00:09:01.011 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.011 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.011 ************************************ 00:09:01.011 END TEST lvs_grow_dirty 00:09:01.011 ************************************ 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:01.274 nvmf_trace.0 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.274 rmmod nvme_tcp 00:09:01.274 rmmod nvme_fabrics 00:09:01.274 rmmod nvme_keyring 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2932863 ']' 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2932863 00:09:01.274 
21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2932863 ']' 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2932863 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2932863 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2932863' 00:09:01.274 killing process with pid 2932863 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2932863 00:09:01.274 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2932863 00:09:01.534 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.534 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.534 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.534 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.534 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.534 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.534 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.534 21:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.444 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:03.444 00:09:03.444 real 0m42.526s 00:09:03.444 user 1m5.731s 00:09:03.444 sys 0m10.104s 00:09:03.444 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.444 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:03.444 ************************************ 00:09:03.444 END TEST nvmf_lvs_grow 00:09:03.444 ************************************ 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.704 ************************************ 00:09:03.704 START TEST nvmf_bdev_io_wait 00:09:03.704 ************************************ 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:03.704 * Looking for test storage... 00:09:03.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.704 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.705 
21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.705 21:34:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:10.281 21:34:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:10.281 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:10.281 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:10.281 Found net devices under 0000:86:00.0: cvl_0_0 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.281 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:10.282 Found net devices under 0000:86:00.1: cvl_0_1 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:10.282 21:34:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:10.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:09:10.282 00:09:10.282 --- 10.0.0.2 ping statistics --- 00:09:10.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.282 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:10.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:09:10.282 00:09:10.282 --- 10.0.0.1 ping statistics --- 00:09:10.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.282 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2936948 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2936948 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2936948 ']' 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.282 21:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.282 [2024-07-24 21:34:17.638802] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:09:10.282 [2024-07-24 21:34:17.638853] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.282 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.282 [2024-07-24 21:34:17.699363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.282 [2024-07-24 21:34:17.775553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.282 [2024-07-24 21:34:17.775594] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.282 [2024-07-24 21:34:17.775602] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.282 [2024-07-24 21:34:17.775607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.282 [2024-07-24 21:34:17.775613] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.282 [2024-07-24 21:34:17.775665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.282 [2024-07-24 21:34:17.775763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.282 [2024-07-24 21:34:17.775850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.282 [2024-07-24 21:34:17.775852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.544 21:34:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.544 [2024-07-24 21:34:18.557267] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.544 Malloc0 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.544 [2024-07-24 21:34:18.620708] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2937197 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2937199 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:10.544 { 00:09:10.544 "params": { 00:09:10.544 "name": "Nvme$subsystem", 00:09:10.544 "trtype": "$TEST_TRANSPORT", 00:09:10.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.544 "adrfam": "ipv4", 00:09:10.544 "trsvcid": "$NVMF_PORT", 00:09:10.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.544 "hdgst": ${hdgst:-false}, 00:09:10.544 "ddgst": ${ddgst:-false} 00:09:10.544 }, 00:09:10.544 "method": "bdev_nvme_attach_controller" 00:09:10.544 } 00:09:10.544 EOF 00:09:10.544 )") 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2937201 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:10.544 { 00:09:10.544 "params": { 00:09:10.544 "name": "Nvme$subsystem", 00:09:10.544 "trtype": "$TEST_TRANSPORT", 00:09:10.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.544 "adrfam": "ipv4", 00:09:10.544 "trsvcid": "$NVMF_PORT", 00:09:10.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.544 "hdgst": ${hdgst:-false}, 00:09:10.544 "ddgst": ${ddgst:-false} 00:09:10.544 }, 00:09:10.544 "method": "bdev_nvme_attach_controller" 00:09:10.544 } 00:09:10.544 EOF 00:09:10.544 )") 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2937204 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:10.544 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:10.544 { 00:09:10.544 "params": { 00:09:10.544 "name": "Nvme$subsystem", 00:09:10.544 "trtype": "$TEST_TRANSPORT", 00:09:10.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.544 "adrfam": "ipv4", 00:09:10.544 "trsvcid": "$NVMF_PORT", 00:09:10.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.544 "hdgst": ${hdgst:-false}, 00:09:10.544 "ddgst": ${ddgst:-false} 00:09:10.544 }, 00:09:10.545 "method": "bdev_nvme_attach_controller" 00:09:10.545 } 00:09:10.545 EOF 00:09:10.545 )") 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:10.545 { 00:09:10.545 "params": { 00:09:10.545 "name": "Nvme$subsystem", 00:09:10.545 "trtype": "$TEST_TRANSPORT", 00:09:10.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.545 "adrfam": "ipv4", 00:09:10.545 "trsvcid": "$NVMF_PORT", 00:09:10.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.545 "hdgst": ${hdgst:-false}, 00:09:10.545 "ddgst": ${ddgst:-false} 00:09:10.545 }, 00:09:10.545 "method": "bdev_nvme_attach_controller" 00:09:10.545 } 00:09:10.545 EOF 00:09:10.545 )") 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2937197 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
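gen_nvmf_target_json, traced above, expands one bdev_nvme_attach_controller entry per subsystem from those heredocs, joins them with jq, and hands the result to each bdevperf instance over /dev/fd/63; the substituted per-controller objects are printed in the trace that follows. Only that inner fragment is visible, so the wrapper in this sketch is an assumption based on the usual SPDK --json config layout:

# Hedged sketch of the full config document bdevperf receives; only the
# "method"/"params" fragment appears in the trace, the wrapper is assumed.
cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# Standalone equivalent of the write instance launched at bdev_io_wait.sh@27:
./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
    --json /tmp/bdevperf_nvme.json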
00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:10.545 "params": { 00:09:10.545 "name": "Nvme1", 00:09:10.545 "trtype": "tcp", 00:09:10.545 "traddr": "10.0.0.2", 00:09:10.545 "adrfam": "ipv4", 00:09:10.545 "trsvcid": "4420", 00:09:10.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.545 "hdgst": false, 00:09:10.545 "ddgst": false 00:09:10.545 }, 00:09:10.545 "method": "bdev_nvme_attach_controller" 00:09:10.545 }' 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:10.545 "params": { 00:09:10.545 "name": "Nvme1", 00:09:10.545 "trtype": "tcp", 00:09:10.545 "traddr": "10.0.0.2", 00:09:10.545 "adrfam": "ipv4", 00:09:10.545 "trsvcid": "4420", 00:09:10.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.545 "hdgst": false, 00:09:10.545 "ddgst": false 00:09:10.545 }, 00:09:10.545 "method": "bdev_nvme_attach_controller" 00:09:10.545 }' 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:10.545 "params": { 00:09:10.545 "name": "Nvme1", 00:09:10.545 "trtype": "tcp", 00:09:10.545 "traddr": "10.0.0.2", 00:09:10.545 "adrfam": "ipv4", 00:09:10.545 "trsvcid": "4420", 00:09:10.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.545 "hdgst": false, 00:09:10.545 "ddgst": false 00:09:10.545 }, 00:09:10.545 "method": "bdev_nvme_attach_controller" 00:09:10.545 }' 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:10.545 21:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:10.545 "params": { 00:09:10.545 "name": "Nvme1", 00:09:10.545 "trtype": "tcp", 00:09:10.545 "traddr": "10.0.0.2", 00:09:10.545 "adrfam": "ipv4", 00:09:10.545 "trsvcid": "4420", 00:09:10.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.545 "hdgst": false, 00:09:10.545 "ddgst": false 00:09:10.545 }, 00:09:10.545 "method": "bdev_nvme_attach_controller" 00:09:10.545 }' 00:09:10.805 [2024-07-24 21:34:18.670823] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:09:10.805 [2024-07-24 21:34:18.670823] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:09:10.805 [2024-07-24 21:34:18.670878] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:10.805 [2024-07-24 21:34:18.670878] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:10.805 [2024-07-24 21:34:18.672050] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:09:10.805 [2024-07-24 21:34:18.672091] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:10.805 [2024-07-24 21:34:18.674013] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:09:10.805 [2024-07-24 21:34:18.674066] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:10.805 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.805 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.805 [2024-07-24 21:34:18.854673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.805 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.065 [2024-07-24 21:34:18.930652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:11.065 [2024-07-24 21:34:18.952880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.065 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.065 [2024-07-24 21:34:19.031033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:11.065 [2024-07-24 21:34:19.050512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.065 [2024-07-24 21:34:19.106203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.065 [2024-07-24 21:34:19.142134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:11.326 [2024-07-24 21:34:19.182085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:11.326 Running I/O for 1 seconds... 00:09:11.326 Running I/O for 1 seconds... 00:09:11.326 Running I/O for 1 seconds... 00:09:11.326 Running I/O for 1 seconds...
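With all four bdevperf instances now reporting "Running I/O for 1 seconds...", the write, read, flush and unmap workloads are exercising the same cnode1 namespace concurrently, which is the point of the bdev_io_wait test. A condensed sketch of the launch-and-wait pattern used above (paths shortened; the actual invocations appear earlier in the trace at bdev_io_wait.sh@27 through @33):

# Condensed sketch of target/bdev_io_wait.sh's orchestration; gen_nvmf_target_json
# is the SPDK helper traced above, and each instance gets its own core mask and
# DPDK file prefix via -m/-i.
bdevperf=./build/examples/bdevperf

"$bdevperf" -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(gen_nvmf_target_json) & WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 -q 128 -o 4096 -w read  -t 1 -s 256 --json <(gen_nvmf_target_json) & READ_PID=$!
"$bdevperf" -m 0x40 -i 3 -q 128 -o 4096 -w flush -t 1 -s 256 --json <(gen_nvmf_target_json) & FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 -q 128 -o 4096 -w unmap -t 1 -s 256 --json <(gen_nvmf_target_json) & UNMAP_PID=$!

sync                                            # matches the explicit sync at bdev_io_wait.sh@35
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"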
00:09:12.266 00:09:12.266 Latency(us) 00:09:12.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.266 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:12.266 Nvme1n1 : 1.01 13506.88 52.76 0.00 0.00 9442.05 2578.70 17666.23 00:09:12.266 =================================================================================================================== 00:09:12.266 Total : 13506.88 52.76 0.00 0.00 9442.05 2578.70 17666.23 00:09:12.266 00:09:12.266 Latency(us) 00:09:12.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.266 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:12.266 Nvme1n1 : 1.00 244945.54 956.82 0.00 0.00 521.04 216.38 1360.58 00:09:12.266 =================================================================================================================== 00:09:12.266 Total : 244945.54 956.82 0.00 0.00 521.04 216.38 1360.58 00:09:12.527 00:09:12.527 Latency(us) 00:09:12.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.527 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:12.527 Nvme1n1 : 1.01 12735.46 49.75 0.00 0.00 10019.41 5242.88 31685.23 00:09:12.527 =================================================================================================================== 00:09:12.527 Total : 12735.46 49.75 0.00 0.00 10019.41 5242.88 31685.23 00:09:12.527 00:09:12.527 Latency(us) 00:09:12.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.527 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:12.527 Nvme1n1 : 1.01 7050.68 27.54 0.00 0.00 18073.42 3818.18 72488.51 00:09:12.527 =================================================================================================================== 00:09:12.527 Total : 7050.68 27.54 0.00 0.00 18073.42 3818.18 72488.51 00:09:12.787 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2937199 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2937201 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2937204 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.788 rmmod nvme_tcp 00:09:12.788 rmmod nvme_fabrics 00:09:12.788 rmmod nvme_keyring 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2936948 ']' 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2936948 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2936948 ']' 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2936948 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2936948 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2936948' 00:09:12.788 killing process with pid 2936948 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2936948 00:09:12.788 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2936948 00:09:13.048 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.048 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.048 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.048 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.048 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.048 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.048 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.048 21:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.964 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:14.964 00:09:14.964 real 0m11.450s 00:09:14.964 user 0m20.054s 00:09:14.964 sys 0m6.060s 00:09:14.964 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.964 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.964 ************************************ 00:09:14.964 END TEST 
nvmf_bdev_io_wait 00:09:14.964 ************************************ 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.225 ************************************ 00:09:15.225 START TEST nvmf_queue_depth 00:09:15.225 ************************************ 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:15.225 * Looking for test storage... 00:09:15.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- 
# '[' 0 -eq 1 ']' 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:15.225 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@296 -- # e810=() 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:21.797 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:21.797 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:21.797 Found net devices under 0000:86:00.0: cvl_0_0 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:21.797 Found net devices under 0000:86:00.1: cvl_0_1 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.797 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:21.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:21.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:09:21.798 00:09:21.798 --- 10.0.0.2 ping statistics --- 00:09:21.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.798 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:21.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:09:21.798 00:09:21.798 --- 10.0.0.1 ping statistics --- 00:09:21.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.798 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2941089 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2941089 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2941089 ']' 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
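To keep the trace above readable: the nvmf_tcp_init bring-up it records reduces to the shell sketch below. The interface names (cvl_0_0, cvl_0_1), the namespace name and the addresses are copied straight from the trace; only the grouping and the comments are added here, so treat it as a summary of what was logged rather than the exact helper code.

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                  # initiator to target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target to initiator reachability
modprobe nvme-tcp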
00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.798 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.798 [2024-07-24 21:34:28.992312] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:09:21.798 [2024-07-24 21:34:28.992360] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.798 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.798 [2024-07-24 21:34:29.050442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.798 [2024-07-24 21:34:29.129477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.798 [2024-07-24 21:34:29.129512] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.798 [2024-07-24 21:34:29.129520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.798 [2024-07-24 21:34:29.129526] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.798 [2024-07-24 21:34:29.129531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.798 [2024-07-24 21:34:29.129547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.798 [2024-07-24 21:34:29.844708] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.798 Malloc0 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.798 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.798 [2024-07-24 21:34:29.910089] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.059 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.059 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2941239 00:09:22.059 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:22.059 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:22.059 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2941239 /var/tmp/bdevperf.sock 00:09:22.059 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2941239 ']' 00:09:22.059 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:22.059 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:22.059 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:22.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:22.059 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:22.059 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.059 [2024-07-24 21:34:29.961010] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
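The target-side configuration that queue_depth.sh has just issued, plus the bdevperf launch, condenses to the sketch below. Every argument is copied from the trace; the scripts/rpc.py invocation is an assumption standing in for the suite's rpc_cmd wrapper, and absolute paths are shortened for readability.

ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &        # target app on core mask 0x2
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                          # TCP transport (options as traced)
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB malloc bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &   # queue depth 1024, 4 KiB verify I/O, 10 s
# The trace that follows attaches NVMe0 to the subsystem over TCP and drives the run with bdevperf.py perform_tests.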
00:09:22.059 [2024-07-24 21:34:29.961071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941239 ] 00:09:22.059 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.059 [2024-07-24 21:34:30.016097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.059 [2024-07-24 21:34:30.101588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.997 21:34:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.997 21:34:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:22.998 21:34:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:22.998 21:34:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.998 21:34:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.998 NVMe0n1 00:09:22.998 21:34:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.998 21:34:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:22.998 Running I/O for 10 seconds... 00:09:32.986 00:09:32.986 Latency(us) 00:09:32.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.986 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:32.986 Verification LBA range: start 0x0 length 0x4000 00:09:32.986 NVMe0n1 : 10.07 12094.77 47.25 0.00 0.00 84393.77 20857.54 62458.66 00:09:32.986 =================================================================================================================== 00:09:32.986 Total : 12094.77 47.25 0.00 0.00 84393.77 20857.54 62458.66 00:09:32.986 0 00:09:32.986 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2941239 00:09:32.986 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2941239 ']' 00:09:32.986 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2941239 00:09:32.986 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2941239 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2941239' 00:09:33.246 killing process with pid 2941239 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2941239 00:09:33.246 Received shutdown 
signal, test time was about 10.000000 seconds 00:09:33.246 00:09:33.246 Latency(us) 00:09:33.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.246 =================================================================================================================== 00:09:33.246 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2941239 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.246 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.246 rmmod nvme_tcp 00:09:33.246 rmmod nvme_fabrics 00:09:33.506 rmmod nvme_keyring 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2941089 ']' 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2941089 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2941089 ']' 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2941089 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2941089 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2941089' 00:09:33.506 killing process with pid 2941089 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2941089 00:09:33.506 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2941089 00:09:33.766 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.766 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.766 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.766 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.766 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.766 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.766 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.766 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.676 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:35.676 00:09:35.676 real 0m20.583s 00:09:35.676 user 0m25.039s 00:09:35.676 sys 0m5.802s 00:09:35.676 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.676 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.676 ************************************ 00:09:35.676 END TEST nvmf_queue_depth 00:09:35.676 ************************************ 00:09:35.676 21:34:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:35.676 21:34:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:35.676 21:34:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.676 21:34:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.676 ************************************ 00:09:35.676 START TEST nvmf_target_multipath 00:09:35.676 ************************************ 00:09:35.676 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:35.936 * Looking for test storage... 
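Before following the multipath suite, the nvmf_queue_depth teardown traced just above amounts to the recap below. killprocess is shown as its kill-and-wait effect, and _remove_spdk_ns runs with xtrace suppressed, so its exact commands are not in the trace; it presumably deletes the cvl_0_0_ns_spdk namespace.

kill 2941239 && wait 2941239          # stop bdevperf after the 10 s run
sync
modprobe -v -r nvme-tcp               # unloads nvme_tcp, nvme_fabrics, nvme_keyring (rmmod lines above)
modprobe -v -r nvme-fabrics
kill 2941089 && wait 2941089          # stop the nvmf_tgt started for this suite
_remove_spdk_ns                       # namespace cleanup, output redirected away
ip -4 addr flush cvl_0_1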
00:09:35.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.936 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:35.937 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
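The block that follows repeats, for the multipath suite, the NIC discovery already traced for nvmf_queue_depth. Condensed, gather_supported_nvmf_pci_devs groups ports by PCI vendor:device ID and resolves each selected port to its kernel netdev; the sketch keeps only the IDs and steps visible in the trace (pci_bus_cache is populated earlier, outside this excerpt).

intel=0x8086 mellanox=0x15b3
e810=( ${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]} )        # ice ports; 0x159b matches 0000:86:00.0/.1 here
x722=( ${pci_bus_cache["$intel:0x37d2"]} )
mlx=( ${pci_bus_cache["$mellanox:0x1017"]} ${pci_bus_cache["$mellanox:0x1015"]} )   # plus the other Mellanox IDs listed above
pci_devs=( "${e810[@]}" )                               # the e810 list is selected, cf. the [[ e810 == e810 ]] check
for pci in "${pci_devs[@]}"; do
  pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )    # e.g. cvl_0_0 and cvl_0_1
  net_devs+=( "${pci_net_devs[@]##*/}" )
done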
00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:41.220 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:41.220 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:41.220 Found net devices under 0000:86:00.0: cvl_0_0 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.220 21:34:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:41.220 Found net devices under 0000:86:00.1: cvl_0_1 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.220 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.221 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:41.482 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:09:41.482 00:09:41.482 --- 10.0.0.2 ping statistics --- 00:09:41.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.482 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:09:41.482 00:09:41.482 --- 10.0.0.1 ping statistics --- 00:09:41.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.482 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:41.482 only one NIC for nvmf test 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:41.482 rmmod nvme_tcp 00:09:41.482 rmmod nvme_fabrics 00:09:41.482 rmmod nvme_keyring 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.482 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.391 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:43.391 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:43.391 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:43.391 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:43.391 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.392 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:43.652 00:09:43.652 real 0m7.738s 
00:09:43.652 user 0m1.462s 00:09:43.652 sys 0m4.226s 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:43.652 ************************************ 00:09:43.652 END TEST nvmf_target_multipath 00:09:43.652 ************************************ 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.652 ************************************ 00:09:43.652 START TEST nvmf_zcopy 00:09:43.652 ************************************ 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:43.652 * Looking for test storage... 00:09:43.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.652 21:34:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:43.652 21:34:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:43.652 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:48.963 21:34:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:48.963 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:48.963 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:48.963 Found net devices under 0000:86:00.0: cvl_0_0 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:48.963 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:48.964 Found net devices under 0000:86:00.1: cvl_0_1 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:48.964 21:34:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:48.964 21:34:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:48.964 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:48.964 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:48.964 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:48.964 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:49.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:09:49.225 00:09:49.225 --- 10.0.0.2 ping statistics --- 00:09:49.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.225 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:49.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:09:49.225 00:09:49.225 --- 10.0.0.1 ping statistics --- 00:09:49.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.225 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2950112 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2950112 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2950112 ']' 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.225 21:34:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.225 [2024-07-24 21:34:57.226463] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
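For reference, the nvmftestinit/nvmf_tcp_init sequence traced above amounts to roughly the following. This is a condensed sketch of the commands already visible in the log (not a verbatim excerpt of nvmf/common.sh); the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addressing are exactly as reported by the test:

    # target-side port is moved into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator-side port stays in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP traffic on port 4420 and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1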
00:09:49.225 [2024-07-24 21:34:57.226506] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.225 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.225 [2024-07-24 21:34:57.281831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.484 [2024-07-24 21:34:57.361057] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.484 [2024-07-24 21:34:57.361090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.484 [2024-07-24 21:34:57.361097] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.484 [2024-07-24 21:34:57.361104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.484 [2024-07-24 21:34:57.361109] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.484 [2024-07-24 21:34:57.361146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.052 [2024-07-24 21:34:58.064356] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.052 [2024-07-24 21:34:58.084500] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.052 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.053 malloc0 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:50.053 { 00:09:50.053 "params": { 00:09:50.053 "name": "Nvme$subsystem", 00:09:50.053 "trtype": "$TEST_TRANSPORT", 00:09:50.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.053 "adrfam": "ipv4", 00:09:50.053 "trsvcid": "$NVMF_PORT", 00:09:50.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.053 "hdgst": ${hdgst:-false}, 00:09:50.053 "ddgst": ${ddgst:-false} 00:09:50.053 }, 00:09:50.053 "method": "bdev_nvme_attach_controller" 00:09:50.053 } 00:09:50.053 EOF 00:09:50.053 )") 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:50.053 21:34:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:50.053 "params": { 00:09:50.053 "name": "Nvme1", 00:09:50.053 "trtype": "tcp", 00:09:50.053 "traddr": "10.0.0.2", 00:09:50.053 "adrfam": "ipv4", 00:09:50.053 "trsvcid": "4420", 00:09:50.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.053 "hdgst": false, 00:09:50.053 "ddgst": false 00:09:50.053 }, 00:09:50.053 "method": "bdev_nvme_attach_controller" 00:09:50.053 }' 00:09:50.312 [2024-07-24 21:34:58.182444] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:09:50.312 [2024-07-24 21:34:58.182488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2950285 ] 00:09:50.312 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.312 [2024-07-24 21:34:58.235573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.312 [2024-07-24 21:34:58.309526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.571 Running I/O for 10 seconds... 00:10:00.551 00:10:00.551 Latency(us) 00:10:00.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.551 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:00.551 Verification LBA range: start 0x0 length 0x1000 00:10:00.551 Nvme1n1 : 10.05 7874.97 61.52 0.00 0.00 16151.45 2023.07 51289.04 00:10:00.551 =================================================================================================================== 00:10:00.551 Total : 7874.97 61.52 0.00 0.00 16151.45 2023.07 51289.04 00:10:00.812 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2951984 00:10:00.812 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:00.812 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.812 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:00.812 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:00.812 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:00.812 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:00.812 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:00.812 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:00.812 { 00:10:00.812 "params": { 00:10:00.812 "name": "Nvme$subsystem", 00:10:00.812 "trtype": "$TEST_TRANSPORT", 00:10:00.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.812 "adrfam": "ipv4", 00:10:00.812 "trsvcid": "$NVMF_PORT", 00:10:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.812 "hdgst": ${hdgst:-false}, 00:10:00.812 "ddgst": ${ddgst:-false} 00:10:00.812 }, 00:10:00.812 "method": "bdev_nvme_attach_controller" 00:10:00.812 } 00:10:00.812 EOF 00:10:00.812 )") 00:10:00.812 21:35:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:00.812 [2024-07-24 21:35:08.734002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.734035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:00.812 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:00.812 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:00.812 "params": { 00:10:00.812 "name": "Nvme1", 00:10:00.812 "trtype": "tcp", 00:10:00.812 "traddr": "10.0.0.2", 00:10:00.812 "adrfam": "ipv4", 00:10:00.812 "trsvcid": "4420", 00:10:00.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.812 "hdgst": false, 00:10:00.812 "ddgst": false 00:10:00.812 }, 00:10:00.812 "method": "bdev_nvme_attach_controller" 00:10:00.812 }' 00:10:00.812 [2024-07-24 21:35:08.746001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.746012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.758029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.758038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.770063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.770072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.770838] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:10:00.812 [2024-07-24 21:35:08.770879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2951984 ] 00:10:00.812 [2024-07-24 21:35:08.782095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.782104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.812 [2024-07-24 21:35:08.794121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.794130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.806155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.806164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.818189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.818198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.823907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.812 [2024-07-24 21:35:08.830222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.830233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.842251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.842262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.854286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.854295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.866319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.866338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.878350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.878360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.890383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.890394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.899617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.812 [2024-07-24 21:35:08.902417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.902427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.914454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.914473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.812 [2024-07-24 21:35:08.926483] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.812 [2024-07-24 21:35:08.926496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:08.938513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:08.938526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:08.950544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:08.950556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:08.962572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:08.962586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:08.974608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:08.974619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:08.986645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:08.986661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:08.998684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:08.998700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:09.010713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:09.010728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:09.022738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:09.022748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:09.034773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:09.034782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:09.046805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:09.046815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:09.058843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:09.058857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:09.070883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:09.070897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:09.082905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:09.082915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:09.094944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:09.094961] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 Running I/O for 5 seconds... 00:10:01.072 [2024-07-24 21:35:09.116060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:09.116094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:09.133539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:09.133557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:09.146914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:09.146932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:09.154736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.072 [2024-07-24 21:35:09.154755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.072 [2024-07-24 21:35:09.171012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.073 [2024-07-24 21:35:09.171031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.073 [2024-07-24 21:35:09.185803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.073 [2024-07-24 21:35:09.185821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.332 [2024-07-24 21:35:09.201038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.201064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.211368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.211387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.220558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.220576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.229762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.229780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.245755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.245774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.255635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.255653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.264192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.264209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.273616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.273634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.282630] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.282648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.297191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.297210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.310099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.310117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.320008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.320027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.329281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.329299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.345249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.345267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.361229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.361247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.371733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.371751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.388913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.388931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.403525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.403543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.419504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.419526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-07-24 21:35:09.434556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-07-24 21:35:09.434574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.450751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.450770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.463097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.463116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.471204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.471221] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.480019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.480037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.495402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.495420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.505794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.505811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.517897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.517915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.532719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.532737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.547561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.547579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.563548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.563567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.574782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.574800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.589198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.589218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.597130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.597148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.611344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.611362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.628399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.628417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.644528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.644546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.656296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.656314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.670999] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.671021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.685466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.685484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.593 [2024-07-24 21:35:09.701529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.593 [2024-07-24 21:35:09.701546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.716301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.716320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.724015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.724032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.737490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.737508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.745437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.745454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.759730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.759748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.771354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.771372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.789011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.789030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.804218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.804236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.819176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.819194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.834062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.834079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.845792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.845810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.854571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.854589] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.871138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.871157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.885488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.885507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.900620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.900639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.908525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.908543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.922614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.922637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.937452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.937470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.853 [2024-07-24 21:35:09.954264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.853 [2024-07-24 21:35:09.954282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:09.968766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:09.968785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:09.982625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:09.982643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:09.996344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:09.996362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.011393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.011417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.026182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.026201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.040290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.040308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.055550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.055569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.070514] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.070533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.081508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.081527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.097701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.097720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.113871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.113889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.129206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.129224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.145101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.145119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.159160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.159178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.174727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.174746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.188877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.188896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.203575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.203597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.113 [2024-07-24 21:35:10.215144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.113 [2024-07-24 21:35:10.215162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.372 [2024-07-24 21:35:10.229627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.372 [2024-07-24 21:35:10.229646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.372 [2024-07-24 21:35:10.244755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.372 [2024-07-24 21:35:10.244773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.372 [2024-07-24 21:35:10.259616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.372 [2024-07-24 21:35:10.259634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.372 [2024-07-24 21:35:10.268491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.372 [2024-07-24 21:35:10.268509] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.279030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.279053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.288458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.288476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.297394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.297411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.314344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.314362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.330056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.330075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.345022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.345041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.360248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.360267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.369152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.369171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.385823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.385843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.399984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.400003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.414425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.414444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.429752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.429770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.438603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.438621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.453003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.453026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.465594] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.465613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.479600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.479619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.373 [2024-07-24 21:35:10.487085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.373 [2024-07-24 21:35:10.487103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.497242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.497261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.512587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.512606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.523258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.523275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.532654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.532672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.541688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.541707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.550716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.550734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.565092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.565111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.574316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.574334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.583705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.583722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.593555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.593572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.607927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.607947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.622685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.622703] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.637858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.637877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.652264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.652282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.662640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.662658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.671587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.671605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.686223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.686243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.697406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.697424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.713341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.713360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.728489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.728507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.633 [2024-07-24 21:35:10.744402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.633 [2024-07-24 21:35:10.744421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.893 [2024-07-24 21:35:10.761263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.761282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.776337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.776354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.786218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.786236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.800787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.800806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.812219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.812237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.826462] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.826480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.840690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.840707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.851419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.851437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.865138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.865156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.879742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.879760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.891769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.891787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.903639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.903657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.919203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.919222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.934576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.934595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.943787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.943804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.965511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.965529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.978056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.978074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:10.992587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:10.992606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-07-24 21:35:11.003537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-07-24 21:35:11.003555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.019133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.019151] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.034400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.034418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.047613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.047631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.056874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.056893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.065513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.065531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.074723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.074740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.089069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.089087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.102990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.103008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.111963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.111981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.130134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.130152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.141389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.141406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.155928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.155946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.170130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.170148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.177656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.177674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.192603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.192621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.208250] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.208268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.221936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.221954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.234886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.234904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.249601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.249619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-07-24 21:35:11.260897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-07-24 21:35:11.260914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.276384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.276402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.296504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.296522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.312083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.312102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.327403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.327421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.337109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.337127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.352341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.352358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.368627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.368646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.384237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.384255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.393297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.393315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.401088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.401106] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.415919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.415937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.429410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.429434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.448365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.448383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.464811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.464830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.475527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.475546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.485245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.485263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.499734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.499752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-07-24 21:35:11.514529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-07-24 21:35:11.514546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.531301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.531320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.546253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.546282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.560216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.560234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.572516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.572534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.584669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.584687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.599242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.599259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.614756] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.614775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.631038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.631069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.645780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.645798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.659920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.659939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.674671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.674689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.686129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.686147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.695248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.695271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.710161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.710181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.721732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.721750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.730814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.730832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.746458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.746477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.757315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.757334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.771856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.771875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-07-24 21:35:11.786453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-07-24 21:35:11.786472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.802758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.802777] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.817737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.817756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.832514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.832532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.843466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.843485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.857617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.857636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.871499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.871519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.882684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.882702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.892276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.892294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.908844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.908862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.923497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.923516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.934732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.934751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.948920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.948944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.962697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.962715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.978841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.978859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:11.994005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:11.994024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:12.005616] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:12.005634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:12.014801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:12.014819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:12.028807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:12.028824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-07-24 21:35:12.042783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-07-24 21:35:12.042802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.194 [2024-07-24 21:35:12.054461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.194 [2024-07-24 21:35:12.054480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.194 [2024-07-24 21:35:12.068722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.068740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.083028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.083053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.093830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.093847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.108310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.108329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.119270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.119287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.128371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.128388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.143181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.143199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.154570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.154587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.169264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.169282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.185001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.185018] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.200091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.200113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.210624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.210642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.219546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.219563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.233218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.233236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.247902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.247921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.261755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.261772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.275792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.275810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.287096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.287115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.301581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.301599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-07-24 21:35:12.309279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-07-24 21:35:12.309296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.322317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.322336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.336293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.336311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.347032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.347056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.361557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.361577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.374818] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.374835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.388140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.388158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.401852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.401869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.415705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.415723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.427839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.427858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.434931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.434952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.454796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.454814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.469554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.469573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.483486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.483503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.505002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.505020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.518593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.518611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.533099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.533123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.549702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.549721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.455 [2024-07-24 21:35:12.560900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.455 [2024-07-24 21:35:12.560918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.715 [2024-07-24 21:35:12.576901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.576921] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.593009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.593026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.608410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.608429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.620086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.620104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.634470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.634490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.648848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.648867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.666524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.666542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.673882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.673900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.683935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.683953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.698880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.698898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.715404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.715421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.725948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.725965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.740941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.740959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.756674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.756692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.773475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.773494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.787613] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.787632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.802504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.802522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.813505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.813523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.716 [2024-07-24 21:35:12.828720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.716 [2024-07-24 21:35:12.828738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.847288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.847307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.861141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.861159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.869183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.869201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.882890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.882908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.898864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.898882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.914172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.914191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.925393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.925411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.934682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.934701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.944282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.944301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.953142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.953159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.967686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.967705] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.978814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.978832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:12.988100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:12.988118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:13.002123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:13.002141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:13.016932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:13.016950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:13.032998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:13.033016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:13.050426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:13.050444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:13.061069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:13.061087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:13.070758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:13.070776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.976 [2024-07-24 21:35:13.086268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.976 [2024-07-24 21:35:13.086286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.101553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.101573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.115485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.115504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.130716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.130740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.145749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.145769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.157266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.157285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.165775] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.165792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.180842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.180860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.191457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.191475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.205859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.205878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.220034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.220063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.231613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.231632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.246533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.246551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.262324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.262345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.236 [2024-07-24 21:35:13.273111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.236 [2024-07-24 21:35:13.273130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.237 [2024-07-24 21:35:13.287843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.237 [2024-07-24 21:35:13.287862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.237 [2024-07-24 21:35:13.295355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.237 [2024-07-24 21:35:13.295373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.237 [2024-07-24 21:35:13.308270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.237 [2024-07-24 21:35:13.308289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.237 [2024-07-24 21:35:13.322326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.237 [2024-07-24 21:35:13.322345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.237 [2024-07-24 21:35:13.335181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.237 [2024-07-24 21:35:13.335204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.237 [2024-07-24 21:35:13.344208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.237 [2024-07-24 21:35:13.344226] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.495 [2024-07-24 21:35:13.360230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.360251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.370458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.370476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.385269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.385287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.402740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.402759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.419129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.419148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.436712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.436730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.450747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.450766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.463716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.463739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.473331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.473349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.490508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.490526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.503691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.503710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.517910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.517928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.532181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.532198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.544586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.544604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.558731] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.558749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.571556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.571574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.588362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.588381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.496 [2024-07-24 21:35:13.602800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.496 [2024-07-24 21:35:13.602818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.619387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.619406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.638828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.638845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.653026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.653051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.670318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.670337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.685294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.685313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.695618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.695636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.702371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.702388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.714822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.714840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.732437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.732459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.740167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.740189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.755356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.755374] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.766069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.766087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.781351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.781368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.797430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.797447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.808117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.808134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.822360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.822379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.836192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.836210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.848216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.848234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.755 [2024-07-24 21:35:13.862826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.755 [2024-07-24 21:35:13.862843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:13.875056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:13.875076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:13.889295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:13.889314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:13.896696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:13.896714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:13.911154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:13.911172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:13.925973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:13.925992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:13.935654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:13.935672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:13.950813] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:13.950831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:13.964347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:13.964366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:13.978451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:13.978473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:13.991579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:13.991597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:14.006159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:14.006177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:14.022531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:14.022549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:14.033201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:14.033220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:14.048180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:14.048198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:14.064036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:14.064059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:14.077838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:14.077856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:14.093014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:14.093032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:14.108163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:14.108181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 [2024-07-24 21:35:14.119331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:14.119349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.016 00:10:06.016 Latency(us) 00:10:06.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.016 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:06.016 Nvme1n1 : 5.01 15620.95 122.04 0.00 0.00 8183.20 2108.55 32141.13 00:10:06.016 =================================================================================================================== 
00:10:06.016 Total : 15620.95 122.04 0.00 0.00 8183.20 2108.55 32141.13 00:10:06.016 [2024-07-24 21:35:14.128308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.016 [2024-07-24 21:35:14.128324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.140351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.140365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.152381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.152398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.164401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.164415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.176435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.176447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.188465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.188484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.200498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.200510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.212527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.212540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.224559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.224570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.236592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.236601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.248622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.248633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.260655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.260665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.272686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.272694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.284718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.284729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.296751] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.296759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 [2024-07-24 21:35:14.308782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.278 [2024-07-24 21:35:14.308790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2951984) - No such process 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2951984 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.278 delay0 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.278 21:35:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:06.278 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.538 [2024-07-24 21:35:14.395519] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:13.116 Initializing NVMe Controllers 00:10:13.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:13.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:13.116 Initialization complete. Launching workers. 
00:10:13.116 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 78 00:10:13.116 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 363, failed to submit 35 00:10:13.117 success 154, unsuccess 209, failed 0 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:13.117 rmmod nvme_tcp 00:10:13.117 rmmod nvme_fabrics 00:10:13.117 rmmod nvme_keyring 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2950112 ']' 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2950112 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2950112 ']' 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2950112 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2950112 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2950112' 00:10:13.117 killing process with pid 2950112 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2950112 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2950112 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.117 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.060 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:15.060 00:10:15.060 real 0m31.476s 00:10:15.060 user 0m42.660s 00:10:15.060 sys 0m10.490s 00:10:15.060 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:15.060 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.060 ************************************ 00:10:15.060 END TEST nvmf_zcopy 00:10:15.060 ************************************ 00:10:15.060 21:35:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:15.060 21:35:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:15.060 21:35:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.060 21:35:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.060 ************************************ 00:10:15.060 START TEST nvmf_nmic 00:10:15.060 ************************************ 00:10:15.060 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:15.320 * Looking for test storage... 00:10:15.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:15.321 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:20.601 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:20.601 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:20.601 Found net devices under 0000:86:00.0: cvl_0_0 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:20.601 Found net devices under 0000:86:00.1: cvl_0_1 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:20.601 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:20.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:20.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:10:20.602 00:10:20.602 --- 10.0.0.2 ping statistics --- 00:10:20.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.602 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:10:20.602 00:10:20.602 --- 10.0.0.1 ping statistics --- 00:10:20.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.602 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2957428 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2957428 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2957428 ']' 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.602 21:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.602 [2024-07-24 21:35:28.614537] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:10:20.602 [2024-07-24 21:35:28.614593] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.602 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.602 [2024-07-24 21:35:28.671817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.862 [2024-07-24 21:35:28.754790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.862 [2024-07-24 21:35:28.754827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.862 [2024-07-24 21:35:28.754834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.862 [2024-07-24 21:35:28.754841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.862 [2024-07-24 21:35:28.754846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.862 [2024-07-24 21:35:28.754887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.862 [2024-07-24 21:35:28.754981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.862 [2024-07-24 21:35:28.755072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.862 [2024-07-24 21:35:28.755074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.432 [2024-07-24 21:35:29.473249] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.432 Malloc0 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.432 [2024-07-24 21:35:29.520964] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.432 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:21.433 test case1: single bdev can't be used in multiple subsystems 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.433 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.433 [2024-07-24 21:35:29.544871] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:21.433 [2024-07-24 21:35:29.544889] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:21.433 [2024-07-24 21:35:29.544896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.692 request: 00:10:21.692 { 00:10:21.692 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:21.692 "namespace": { 
00:10:21.692 "bdev_name": "Malloc0", 00:10:21.692 "no_auto_visible": false 00:10:21.692 }, 00:10:21.692 "method": "nvmf_subsystem_add_ns", 00:10:21.692 "req_id": 1 00:10:21.692 } 00:10:21.692 Got JSON-RPC error response 00:10:21.692 response: 00:10:21.692 { 00:10:21.692 "code": -32602, 00:10:21.692 "message": "Invalid parameters" 00:10:21.692 } 00:10:21.692 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:21.692 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:21.692 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:21.692 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:21.692 Adding namespace failed - expected result. 00:10:21.692 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:21.692 test case2: host connect to nvmf target in multiple paths 00:10:21.692 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:21.692 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.692 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.692 [2024-07-24 21:35:29.556986] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:21.692 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.692 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:22.640 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:24.020 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:24.020 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1196 -- # local i=0 00:10:24.020 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.020 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:24.020 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # sleep 2 00:10:25.932 21:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:25.932 21:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:25.932 21:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:25.932 21:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:25.932 21:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.932 21:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # return 0 
00:10:25.932 21:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:25.932 [global] 00:10:25.932 thread=1 00:10:25.932 invalidate=1 00:10:25.932 rw=write 00:10:25.932 time_based=1 00:10:25.932 runtime=1 00:10:25.932 ioengine=libaio 00:10:25.932 direct=1 00:10:25.932 bs=4096 00:10:25.932 iodepth=1 00:10:25.932 norandommap=0 00:10:25.932 numjobs=1 00:10:25.932 00:10:25.932 verify_dump=1 00:10:25.932 verify_backlog=512 00:10:25.932 verify_state_save=0 00:10:25.932 do_verify=1 00:10:25.932 verify=crc32c-intel 00:10:25.932 [job0] 00:10:25.932 filename=/dev/nvme0n1 00:10:25.932 Could not set queue depth (nvme0n1) 00:10:26.192 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.192 fio-3.35 00:10:26.192 Starting 1 thread 00:10:27.576 00:10:27.576 job0: (groupid=0, jobs=1): err= 0: pid=2958500: Wed Jul 24 21:35:35 2024 00:10:27.576 read: IOPS=19, BW=78.4KiB/s (80.3kB/s)(80.0KiB/1020msec) 00:10:27.576 slat (nsec): min=9684, max=24096, avg=21029.35, stdev=2771.60 00:10:27.576 clat (usec): min=41846, max=42041, avg=41964.64, stdev=47.46 00:10:27.576 lat (usec): min=41868, max=42062, avg=41985.67, stdev=48.56 00:10:27.576 clat percentiles (usec): 00:10:27.576 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:10:27.576 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:27.576 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:27.576 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:27.576 | 99.99th=[42206] 00:10:27.576 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:10:27.576 slat (nsec): min=10165, max=41202, avg=13121.02, stdev=4787.98 00:10:27.576 clat (usec): min=221, max=876, avg=335.49, stdev=176.20 00:10:27.576 lat (usec): min=232, max=909, avg=348.61, stdev=179.54 00:10:27.576 clat percentiles (usec): 00:10:27.576 | 1.00th=[ 225], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 233], 00:10:27.576 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 273], 00:10:27.576 | 70.00th=[ 281], 80.00th=[ 445], 90.00th=[ 693], 95.00th=[ 783], 00:10:27.576 | 99.00th=[ 840], 99.50th=[ 865], 99.90th=[ 881], 99.95th=[ 881], 00:10:27.576 | 99.99th=[ 881] 00:10:27.576 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:27.576 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:27.576 lat (usec) : 250=53.01%, 500=27.07%, 750=10.53%, 1000=5.64% 00:10:27.576 lat (msec) : 50=3.76% 00:10:27.576 cpu : usr=0.29%, sys=1.18%, ctx=532, majf=0, minf=2 00:10:27.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.576 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.576 00:10:27.576 Run status group 0 (all jobs): 00:10:27.576 READ: bw=78.4KiB/s (80.3kB/s), 78.4KiB/s-78.4KiB/s (80.3kB/s-80.3kB/s), io=80.0KiB (81.9kB), run=1020-1020msec 00:10:27.576 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:10:27.576 00:10:27.576 Disk stats (read/write): 00:10:27.576 nvme0n1: ios=67/512, merge=0/0, ticks=788/162, 
in_queue=950, util=93.29% 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1217 -- # local i=0 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # return 0 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.576 rmmod nvme_tcp 00:10:27.576 rmmod nvme_fabrics 00:10:27.576 rmmod nvme_keyring 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2957428 ']' 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2957428 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2957428 ']' 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2957428 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2957428 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2957428' 00:10:27.576 killing process with pid 2957428 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@967 -- # kill 2957428 00:10:27.576 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2957428 00:10:27.837 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:27.837 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:27.837 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:27.837 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.837 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:27.837 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.837 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.837 21:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.749 21:35:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:29.749 00:10:29.749 real 0m14.732s 00:10:29.749 user 0m34.950s 00:10:29.749 sys 0m4.726s 00:10:29.749 21:35:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.749 21:35:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.749 ************************************ 00:10:29.749 END TEST nvmf_nmic 00:10:29.749 ************************************ 00:10:30.011 21:35:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:30.011 21:35:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:30.011 21:35:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.011 21:35:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.011 ************************************ 00:10:30.011 START TEST nvmf_fio_target 00:10:30.011 ************************************ 00:10:30.011 21:35:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:30.011 * Looking for test storage... 
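Aside for anyone reproducing the nvmf_nmic write/verify pass above outside the harness: scripts/fio-wrapper prints only the job parameters it generated, so the logged [global]/[job0] section corresponds to a standalone job file roughly like the sketch below. This is a reconstruction from the values printed above, not captured output; the job-file path is illustrative, /dev/nvme0n1 is assumed to be the namespace exposed by the preceding 'nvme connect', and the wrapper itself may add or override options.

# Sketch: rebuild the fio job from the [global]/[job0] values logged above.
# Assumption: /dev/nvme0n1 is the connected NVMe-oF namespace on this host.
cat > /tmp/nmic-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-verify.fio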
00:10:30.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.011 21:35:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:30.011 21:35:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:30.011 21:35:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:35.299 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:35.299 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:35.299 
21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:35.299 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:35.300 Found net devices under 0000:86:00.0: cvl_0_0 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:35.300 Found net devices under 0000:86:00.1: cvl_0_1 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:35.300 21:35:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:35.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:10:35.300 00:10:35.300 --- 10.0.0.2 ping statistics --- 00:10:35.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.300 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:35.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:35.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:10:35.300 00:10:35.300 --- 10.0.0.1 ping statistics --- 00:10:35.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.300 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:35.300 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2962178 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2962178 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2962178 ']' 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.560 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.560 [2024-07-24 21:35:43.476114] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
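Aside: the nvmf_tcp_init/nvmfappstart trace above condenses to roughly the shell below, reconstructed from the ip/iptables/modprobe commands that were logged. The interface names (cvl_0_0 in the namespace, cvl_0_1 in the root namespace), the 10.0.0.1/10.0.0.2 addresses, port 4420, and the nvmf_tgt flags are taken from the trace; the shortened ./build/bin path and the trailing '&' are illustrative, since the harness launches the target through nvmfappstart and waits for its RPC socket instead.

# Sketch assembled from the commands traced above (not captured output).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side ice port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
modprobe nvme-tcp
# Launch the target inside the namespace (path shortened; backgrounding is
# an assumption here, the harness polls the RPC socket via waitforlisten):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &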
00:10:35.560 [2024-07-24 21:35:43.476161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.560 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.560 [2024-07-24 21:35:43.534612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.560 [2024-07-24 21:35:43.616146] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.560 [2024-07-24 21:35:43.616190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.560 [2024-07-24 21:35:43.616198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.560 [2024-07-24 21:35:43.616204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.560 [2024-07-24 21:35:43.616209] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.560 [2024-07-24 21:35:43.616253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.560 [2024-07-24 21:35:43.616348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.560 [2024-07-24 21:35:43.616369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.560 [2024-07-24 21:35:43.616371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.558 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.558 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:36.558 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:36.558 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:36.558 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.558 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.558 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:36.558 [2024-07-24 21:35:44.484822] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.558 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.819 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:36.819 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.819 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:36.819 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.079 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:37.079 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.339 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:37.339 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:37.598 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.598 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:37.598 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.858 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:37.858 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.119 21:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:38.119 21:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:38.119 21:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:38.380 21:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:38.380 21:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:38.639 21:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:38.640 21:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:38.899 21:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.899 [2024-07-24 21:35:46.938000] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.899 21:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:39.159 21:35:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:39.419 21:35:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:40.359 21:35:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:40.359 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local i=0 00:10:40.359 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.359 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # [[ -n 4 ]] 00:10:40.359 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # nvme_device_counter=4 00:10:40.359 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # sleep 2 00:10:42.902 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:42.902 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:42.902 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:42.902 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_devices=4 00:10:42.902 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.902 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # return 0 00:10:42.902 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:42.902 [global] 00:10:42.902 thread=1 00:10:42.902 invalidate=1 00:10:42.902 rw=write 00:10:42.902 time_based=1 00:10:42.902 runtime=1 00:10:42.902 ioengine=libaio 00:10:42.902 direct=1 00:10:42.902 bs=4096 00:10:42.902 iodepth=1 00:10:42.902 norandommap=0 00:10:42.902 numjobs=1 00:10:42.902 00:10:42.902 verify_dump=1 00:10:42.902 verify_backlog=512 00:10:42.902 verify_state_save=0 00:10:42.902 do_verify=1 00:10:42.902 verify=crc32c-intel 00:10:42.902 [job0] 00:10:42.902 filename=/dev/nvme0n1 00:10:42.902 [job1] 00:10:42.902 filename=/dev/nvme0n2 00:10:42.902 [job2] 00:10:42.902 filename=/dev/nvme0n3 00:10:42.902 [job3] 00:10:42.902 filename=/dev/nvme0n4 00:10:42.902 Could not set queue depth (nvme0n1) 00:10:42.902 Could not set queue depth (nvme0n2) 00:10:42.902 Could not set queue depth (nvme0n3) 00:10:42.902 Could not set queue depth (nvme0n4) 00:10:42.902 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.902 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.902 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.902 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.902 fio-3.35 00:10:42.902 Starting 4 threads 00:10:44.282 00:10:44.282 job0: (groupid=0, jobs=1): err= 0: pid=2963544: Wed Jul 24 21:35:51 2024 00:10:44.282 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:44.282 slat (nsec): min=7642, max=40730, avg=8772.90, stdev=1912.77 00:10:44.282 clat (usec): min=342, max=998, avg=512.22, stdev=52.67 00:10:44.282 lat (usec): min=353, max=1007, avg=521.00, stdev=52.73 00:10:44.282 clat percentiles (usec): 00:10:44.282 | 1.00th=[ 363], 5.00th=[ 420], 10.00th=[ 474], 20.00th=[ 494], 
00:10:44.282 | 30.00th=[ 502], 40.00th=[ 510], 50.00th=[ 515], 60.00th=[ 523], 00:10:44.282 | 70.00th=[ 529], 80.00th=[ 529], 90.00th=[ 545], 95.00th=[ 553], 00:10:44.282 | 99.00th=[ 734], 99.50th=[ 791], 99.90th=[ 824], 99.95th=[ 996], 00:10:44.282 | 99.99th=[ 996] 00:10:44.282 write: IOPS=1456, BW=5826KiB/s (5966kB/s)(5832KiB/1001msec); 0 zone resets 00:10:44.282 slat (usec): min=10, max=2684, avg=16.24, stdev=90.70 00:10:44.282 clat (usec): min=224, max=1199, avg=297.50, stdev=91.57 00:10:44.282 lat (usec): min=236, max=3214, avg=313.74, stdev=133.64 00:10:44.282 clat percentiles (usec): 00:10:44.282 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:10:44.282 | 30.00th=[ 245], 40.00th=[ 260], 50.00th=[ 277], 60.00th=[ 285], 00:10:44.282 | 70.00th=[ 306], 80.00th=[ 334], 90.00th=[ 379], 95.00th=[ 420], 00:10:44.282 | 99.00th=[ 668], 99.50th=[ 889], 99.90th=[ 1123], 99.95th=[ 1205], 00:10:44.282 | 99.99th=[ 1205] 00:10:44.282 bw ( KiB/s): min= 5144, max= 5144, per=24.85%, avg=5144.00, stdev= 0.00, samples=1 00:10:44.282 iops : min= 1286, max= 1286, avg=1286.00, stdev= 0.00, samples=1 00:10:44.282 lat (usec) : 250=19.90%, 500=47.90%, 750=31.27%, 1000=0.77% 00:10:44.282 lat (msec) : 2=0.16% 00:10:44.282 cpu : usr=2.00%, sys=4.40%, ctx=2488, majf=0, minf=1 00:10:44.282 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.282 issued rwts: total=1024,1458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.282 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.282 job1: (groupid=0, jobs=1): err= 0: pid=2963552: Wed Jul 24 21:35:51 2024 00:10:44.282 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:44.282 slat (nsec): min=6306, max=19453, avg=7203.60, stdev=805.69 00:10:44.282 clat (usec): min=425, max=805, avg=525.05, stdev=32.98 00:10:44.282 lat (usec): min=432, max=815, avg=532.25, stdev=33.02 00:10:44.282 clat percentiles (usec): 00:10:44.282 | 1.00th=[ 449], 5.00th=[ 486], 10.00th=[ 498], 20.00th=[ 506], 00:10:44.282 | 30.00th=[ 515], 40.00th=[ 519], 50.00th=[ 523], 60.00th=[ 529], 00:10:44.282 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 545], 95.00th=[ 570], 00:10:44.282 | 99.00th=[ 652], 99.50th=[ 709], 99.90th=[ 783], 99.95th=[ 807], 00:10:44.282 | 99.99th=[ 807] 00:10:44.282 write: IOPS=1407, BW=5630KiB/s (5765kB/s)(5636KiB/1001msec); 0 zone resets 00:10:44.282 slat (usec): min=6, max=1522, avg=11.42, stdev=42.51 00:10:44.282 clat (usec): min=223, max=1417, avg=306.82, stdev=111.40 00:10:44.282 lat (usec): min=232, max=2238, avg=318.23, stdev=124.34 00:10:44.282 clat percentiles (usec): 00:10:44.282 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 239], 00:10:44.282 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 277], 00:10:44.282 | 70.00th=[ 314], 80.00th=[ 351], 90.00th=[ 420], 95.00th=[ 586], 00:10:44.282 | 99.00th=[ 717], 99.50th=[ 799], 99.90th=[ 1270], 99.95th=[ 1418], 00:10:44.282 | 99.99th=[ 1418] 00:10:44.282 bw ( KiB/s): min= 4728, max= 4728, per=22.84%, avg=4728.00, stdev= 0.00, samples=1 00:10:44.282 iops : min= 1182, max= 1182, avg=1182.00, stdev= 0.00, samples=1 00:10:44.282 lat (usec) : 250=22.73%, 500=35.31%, 750=41.39%, 1000=0.49% 00:10:44.282 lat (msec) : 2=0.08% 00:10:44.282 cpu : usr=1.10%, sys=2.30%, ctx=2437, majf=0, minf=2 00:10:44.282 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:44.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.282 issued rwts: total=1024,1409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.282 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.282 job2: (groupid=0, jobs=1): err= 0: pid=2963564: Wed Jul 24 21:35:51 2024 00:10:44.282 read: IOPS=761, BW=3045KiB/s (3118kB/s)(3048KiB/1001msec) 00:10:44.282 slat (nsec): min=6478, max=22800, avg=9704.44, stdev=1379.14 00:10:44.282 clat (usec): min=454, max=41745, avg=873.72, stdev=2934.65 00:10:44.282 lat (usec): min=464, max=41755, avg=883.42, stdev=2934.56 00:10:44.282 clat percentiles (usec): 00:10:44.282 | 1.00th=[ 490], 5.00th=[ 529], 10.00th=[ 578], 20.00th=[ 627], 00:10:44.282 | 30.00th=[ 644], 40.00th=[ 652], 50.00th=[ 668], 60.00th=[ 676], 00:10:44.282 | 70.00th=[ 685], 80.00th=[ 693], 90.00th=[ 717], 95.00th=[ 750], 00:10:44.282 | 99.00th=[ 832], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:10:44.282 | 99.99th=[41681] 00:10:44.282 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:44.282 slat (nsec): min=11318, max=42430, avg=13477.89, stdev=2555.47 00:10:44.282 clat (usec): min=227, max=752, avg=300.01, stdev=74.90 00:10:44.282 lat (usec): min=239, max=765, avg=313.48, stdev=75.35 00:10:44.282 clat percentiles (usec): 00:10:44.282 | 1.00th=[ 231], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 249], 00:10:44.282 | 30.00th=[ 265], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:10:44.282 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 379], 95.00th=[ 469], 00:10:44.282 | 99.00th=[ 603], 99.50th=[ 611], 99.90th=[ 717], 99.95th=[ 750], 00:10:44.282 | 99.99th=[ 750] 00:10:44.282 bw ( KiB/s): min= 4096, max= 4096, per=19.78%, avg=4096.00, stdev= 0.00, samples=1 00:10:44.282 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:44.282 lat (usec) : 250=12.15%, 500=43.95%, 750=41.88%, 1000=1.74% 00:10:44.282 lat (msec) : 4=0.06%, 50=0.22% 00:10:44.282 cpu : usr=1.70%, sys=3.10%, ctx=1789, majf=0, minf=1 00:10:44.282 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.282 issued rwts: total=762,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.282 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.282 job3: (groupid=0, jobs=1): err= 0: pid=2963569: Wed Jul 24 21:35:51 2024 00:10:44.282 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:44.282 slat (nsec): min=6758, max=27708, avg=7443.32, stdev=835.31 00:10:44.282 clat (usec): min=342, max=881, avg=528.33, stdev=39.47 00:10:44.282 lat (usec): min=350, max=889, avg=535.78, stdev=39.46 00:10:44.282 clat percentiles (usec): 00:10:44.282 | 1.00th=[ 396], 5.00th=[ 469], 10.00th=[ 494], 20.00th=[ 510], 00:10:44.282 | 30.00th=[ 519], 40.00th=[ 523], 50.00th=[ 529], 60.00th=[ 537], 00:10:44.282 | 70.00th=[ 537], 80.00th=[ 545], 90.00th=[ 562], 95.00th=[ 578], 00:10:44.282 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 873], 99.95th=[ 881], 00:10:44.282 | 99.99th=[ 881] 00:10:44.282 write: IOPS=1288, BW=5155KiB/s (5279kB/s)(5160KiB/1001msec); 0 zone resets 00:10:44.282 slat (nsec): min=9727, max=40144, avg=11105.78, stdev=2240.69 00:10:44.282 clat (usec): min=221, max=3253, avg=333.11, stdev=162.07 00:10:44.282 lat (usec): min=232, 
max=3267, avg=344.22, stdev=162.91 00:10:44.282 clat percentiles (usec): 00:10:44.282 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 243], 00:10:44.283 | 30.00th=[ 255], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 289], 00:10:44.283 | 70.00th=[ 318], 80.00th=[ 392], 90.00th=[ 515], 95.00th=[ 603], 00:10:44.283 | 99.00th=[ 840], 99.50th=[ 1057], 99.90th=[ 1762], 99.95th=[ 3261], 00:10:44.283 | 99.99th=[ 3261] 00:10:44.283 bw ( KiB/s): min= 4096, max= 4096, per=19.78%, avg=4096.00, stdev= 0.00, samples=1 00:10:44.283 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:44.283 lat (usec) : 250=14.78%, 500=39.41%, 750=44.64%, 1000=0.82% 00:10:44.283 lat (msec) : 2=0.30%, 4=0.04% 00:10:44.283 cpu : usr=1.30%, sys=2.10%, ctx=2316, majf=0, minf=1 00:10:44.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.283 issued rwts: total=1024,1290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.283 00:10:44.283 Run status group 0 (all jobs): 00:10:44.283 READ: bw=15.0MiB/s (15.7MB/s), 3045KiB/s-4092KiB/s (3118kB/s-4190kB/s), io=15.0MiB (15.7MB), run=1001-1001msec 00:10:44.283 WRITE: bw=20.2MiB/s (21.2MB/s), 4092KiB/s-5826KiB/s (4190kB/s-5966kB/s), io=20.2MiB (21.2MB), run=1001-1001msec 00:10:44.283 00:10:44.283 Disk stats (read/write): 00:10:44.283 nvme0n1: ios=1033/1024, merge=0/0, ticks=640/291, in_queue=931, util=86.96% 00:10:44.283 nvme0n2: ios=1024/1024, merge=0/0, ticks=607/308, in_queue=915, util=90.96% 00:10:44.283 nvme0n3: ios=814/1024, merge=0/0, ticks=1054/290, in_queue=1344, util=94.68% 00:10:44.283 nvme0n4: ios=907/1024, merge=0/0, ticks=1351/360, in_queue=1711, util=94.32% 00:10:44.283 21:35:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:44.283 [global] 00:10:44.283 thread=1 00:10:44.283 invalidate=1 00:10:44.283 rw=randwrite 00:10:44.283 time_based=1 00:10:44.283 runtime=1 00:10:44.283 ioengine=libaio 00:10:44.283 direct=1 00:10:44.283 bs=4096 00:10:44.283 iodepth=1 00:10:44.283 norandommap=0 00:10:44.283 numjobs=1 00:10:44.283 00:10:44.283 verify_dump=1 00:10:44.283 verify_backlog=512 00:10:44.283 verify_state_save=0 00:10:44.283 do_verify=1 00:10:44.283 verify=crc32c-intel 00:10:44.283 [job0] 00:10:44.283 filename=/dev/nvme0n1 00:10:44.283 [job1] 00:10:44.283 filename=/dev/nvme0n2 00:10:44.283 [job2] 00:10:44.283 filename=/dev/nvme0n3 00:10:44.283 [job3] 00:10:44.283 filename=/dev/nvme0n4 00:10:44.283 Could not set queue depth (nvme0n1) 00:10:44.283 Could not set queue depth (nvme0n2) 00:10:44.283 Could not set queue depth (nvme0n3) 00:10:44.283 Could not set queue depth (nvme0n4) 00:10:44.283 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.283 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.283 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.283 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.283 fio-3.35 00:10:44.283 Starting 4 threads 00:10:45.663 00:10:45.663 job0: (groupid=0, 
jobs=1): err= 0: pid=2963980: Wed Jul 24 21:35:53 2024 00:10:45.663 read: IOPS=631, BW=2525KiB/s (2585kB/s)(2540KiB/1006msec) 00:10:45.663 slat (nsec): min=5272, max=27978, avg=8164.45, stdev=1710.65 00:10:45.663 clat (usec): min=374, max=43120, avg=1077.47, stdev=4921.90 00:10:45.663 lat (usec): min=383, max=43128, avg=1085.63, stdev=4922.33 00:10:45.663 clat percentiles (usec): 00:10:45.663 | 1.00th=[ 379], 5.00th=[ 392], 10.00th=[ 424], 20.00th=[ 469], 00:10:45.663 | 30.00th=[ 490], 40.00th=[ 498], 50.00th=[ 502], 60.00th=[ 506], 00:10:45.663 | 70.00th=[ 506], 80.00th=[ 515], 90.00th=[ 519], 95.00th=[ 529], 00:10:45.663 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:10:45.663 | 99.99th=[43254] 00:10:45.663 write: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec); 0 zone resets 00:10:45.663 slat (nsec): min=6248, max=40234, avg=10199.99, stdev=2991.36 00:10:45.663 clat (usec): min=212, max=719, avg=293.27, stdev=75.48 00:10:45.663 lat (usec): min=219, max=760, avg=303.47, stdev=77.91 00:10:45.663 clat percentiles (usec): 00:10:45.663 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:10:45.663 | 30.00th=[ 239], 40.00th=[ 262], 50.00th=[ 285], 60.00th=[ 318], 00:10:45.663 | 70.00th=[ 322], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 433], 00:10:45.663 | 99.00th=[ 603], 99.50th=[ 611], 99.90th=[ 619], 99.95th=[ 717], 00:10:45.663 | 99.99th=[ 717] 00:10:45.663 bw ( KiB/s): min= 1752, max= 6440, per=22.36%, avg=4096.00, stdev=3314.92, samples=2 00:10:45.664 iops : min= 438, max= 1610, avg=1024.00, stdev=828.73, samples=2 00:10:45.664 lat (usec) : 250=22.30%, 500=54.85%, 750=22.24% 00:10:45.664 lat (msec) : 2=0.06%, 50=0.54% 00:10:45.664 cpu : usr=0.40%, sys=1.99%, ctx=1661, majf=0, minf=1 00:10:45.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.664 issued rwts: total=635,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.664 job1: (groupid=0, jobs=1): err= 0: pid=2963993: Wed Jul 24 21:35:53 2024 00:10:45.664 read: IOPS=1056, BW=4228KiB/s (4329kB/s)(4232KiB/1001msec) 00:10:45.664 slat (nsec): min=7008, max=20583, avg=8445.41, stdev=982.55 00:10:45.664 clat (usec): min=343, max=969, avg=479.86, stdev=51.02 00:10:45.664 lat (usec): min=351, max=977, avg=488.30, stdev=51.01 00:10:45.664 clat percentiles (usec): 00:10:45.664 | 1.00th=[ 359], 5.00th=[ 383], 10.00th=[ 400], 20.00th=[ 449], 00:10:45.664 | 30.00th=[ 465], 40.00th=[ 478], 50.00th=[ 486], 60.00th=[ 494], 00:10:45.664 | 70.00th=[ 506], 80.00th=[ 515], 90.00th=[ 529], 95.00th=[ 537], 00:10:45.664 | 99.00th=[ 570], 99.50th=[ 627], 99.90th=[ 938], 99.95th=[ 971], 00:10:45.664 | 99.99th=[ 971] 00:10:45.664 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:45.664 slat (nsec): min=9361, max=38256, avg=11704.45, stdev=1594.89 00:10:45.664 clat (usec): min=213, max=816, avg=297.55, stdev=80.64 00:10:45.664 lat (usec): min=224, max=828, avg=309.25, stdev=81.04 00:10:45.664 clat percentiles (usec): 00:10:45.664 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 243], 00:10:45.664 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 310], 00:10:45.664 | 70.00th=[ 322], 80.00th=[ 326], 90.00th=[ 363], 95.00th=[ 445], 00:10:45.664 | 99.00th=[ 644], 99.50th=[ 652], 99.90th=[ 742], 
99.95th=[ 816], 00:10:45.664 | 99.99th=[ 816] 00:10:45.664 bw ( KiB/s): min= 6360, max= 6360, per=34.71%, avg=6360.00, stdev= 0.00, samples=1 00:10:45.664 iops : min= 1590, max= 1590, avg=1590.00, stdev= 0.00, samples=1 00:10:45.664 lat (usec) : 250=15.54%, 500=67.89%, 750=16.42%, 1000=0.15% 00:10:45.664 cpu : usr=1.30%, sys=3.00%, ctx=2595, majf=0, minf=2 00:10:45.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.664 issued rwts: total=1058,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.664 job2: (groupid=0, jobs=1): err= 0: pid=2964010: Wed Jul 24 21:35:53 2024 00:10:45.664 read: IOPS=1028, BW=4116KiB/s (4215kB/s)(4120KiB/1001msec) 00:10:45.664 slat (nsec): min=6662, max=25651, avg=7473.40, stdev=778.03 00:10:45.664 clat (usec): min=329, max=965, avg=523.38, stdev=66.07 00:10:45.664 lat (usec): min=337, max=973, avg=530.86, stdev=66.08 00:10:45.664 clat percentiles (usec): 00:10:45.664 | 1.00th=[ 347], 5.00th=[ 375], 10.00th=[ 449], 20.00th=[ 494], 00:10:45.664 | 30.00th=[ 510], 40.00th=[ 523], 50.00th=[ 529], 60.00th=[ 545], 00:10:45.664 | 70.00th=[ 553], 80.00th=[ 562], 90.00th=[ 570], 95.00th=[ 578], 00:10:45.664 | 99.00th=[ 766], 99.50th=[ 799], 99.90th=[ 963], 99.95th=[ 963], 00:10:45.664 | 99.99th=[ 963] 00:10:45.664 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:45.664 slat (nsec): min=9535, max=40248, avg=10576.69, stdev=1405.72 00:10:45.664 clat (usec): min=219, max=934, avg=280.57, stdev=79.65 00:10:45.664 lat (usec): min=229, max=945, avg=291.14, stdev=79.86 00:10:45.664 clat percentiles (usec): 00:10:45.664 | 1.00th=[ 223], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 233], 00:10:45.664 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 269], 00:10:45.664 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 359], 95.00th=[ 404], 00:10:45.664 | 99.00th=[ 611], 99.50th=[ 734], 99.90th=[ 914], 99.95th=[ 938], 00:10:45.664 | 99.99th=[ 938] 00:10:45.664 bw ( KiB/s): min= 6008, max= 6008, per=32.79%, avg=6008.00, stdev= 0.00, samples=1 00:10:45.664 iops : min= 1502, max= 1502, avg=1502.00, stdev= 0.00, samples=1 00:10:45.664 lat (usec) : 250=27.08%, 500=40.72%, 750=31.49%, 1000=0.70% 00:10:45.664 cpu : usr=1.50%, sys=2.20%, ctx=2569, majf=0, minf=1 00:10:45.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.664 issued rwts: total=1030,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.664 job3: (groupid=0, jobs=1): err= 0: pid=2964016: Wed Jul 24 21:35:53 2024 00:10:45.664 read: IOPS=20, BW=83.6KiB/s (85.6kB/s)(84.0KiB/1005msec) 00:10:45.664 slat (nsec): min=9930, max=27590, avg=18251.86, stdev=6115.52 00:10:45.664 clat (usec): min=1047, max=43044, avg=40051.18, stdev=8941.35 00:10:45.664 lat (usec): min=1071, max=43056, avg=40069.43, stdev=8940.14 00:10:45.664 clat percentiles (usec): 00:10:45.664 | 1.00th=[ 1045], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:10:45.664 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:45.664 | 70.00th=[42206], 80.00th=[42206], 
90.00th=[42206], 95.00th=[42206], 00:10:45.664 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:45.664 | 99.99th=[43254] 00:10:45.664 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:45.664 slat (nsec): min=9876, max=36535, avg=11388.70, stdev=1907.65 00:10:45.664 clat (usec): min=218, max=941, avg=302.72, stdev=107.38 00:10:45.664 lat (usec): min=232, max=953, avg=314.11, stdev=107.71 00:10:45.664 clat percentiles (usec): 00:10:45.664 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:10:45.664 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:10:45.664 | 70.00th=[ 281], 80.00th=[ 318], 90.00th=[ 449], 95.00th=[ 603], 00:10:45.664 | 99.00th=[ 685], 99.50th=[ 725], 99.90th=[ 938], 99.95th=[ 938], 00:10:45.664 | 99.99th=[ 938] 00:10:45.664 bw ( KiB/s): min= 4096, max= 4096, per=22.36%, avg=4096.00, stdev= 0.00, samples=1 00:10:45.664 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:45.664 lat (usec) : 250=30.39%, 500=58.54%, 750=6.75%, 1000=0.38% 00:10:45.664 lat (msec) : 2=0.19%, 50=3.75% 00:10:45.664 cpu : usr=0.10%, sys=0.80%, ctx=534, majf=0, minf=1 00:10:45.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.664 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.664 00:10:45.664 Run status group 0 (all jobs): 00:10:45.664 READ: bw=10.7MiB/s (11.2MB/s), 83.6KiB/s-4228KiB/s (85.6kB/s-4329kB/s), io=10.7MiB (11.2MB), run=1001-1006msec 00:10:45.664 WRITE: bw=17.9MiB/s (18.8MB/s), 2038KiB/s-6138KiB/s (2087kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1006msec 00:10:45.664 00:10:45.664 Disk stats (read/write): 00:10:45.664 nvme0n1: ios=653/1024, merge=0/0, ticks=1354/290, in_queue=1644, util=86.07% 00:10:45.664 nvme0n2: ios=1046/1090, merge=0/0, ticks=1372/336, in_queue=1708, util=90.26% 00:10:45.664 nvme0n3: ios=1077/1056, merge=0/0, ticks=789/294, in_queue=1083, util=94.81% 00:10:45.664 nvme0n4: ios=73/512, merge=0/0, ticks=799/156, in_queue=955, util=95.60% 00:10:45.664 21:35:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:45.664 [global] 00:10:45.664 thread=1 00:10:45.664 invalidate=1 00:10:45.664 rw=write 00:10:45.664 time_based=1 00:10:45.664 runtime=1 00:10:45.664 ioengine=libaio 00:10:45.664 direct=1 00:10:45.664 bs=4096 00:10:45.664 iodepth=128 00:10:45.664 norandommap=0 00:10:45.664 numjobs=1 00:10:45.664 00:10:45.664 verify_dump=1 00:10:45.664 verify_backlog=512 00:10:45.664 verify_state_save=0 00:10:45.664 do_verify=1 00:10:45.664 verify=crc32c-intel 00:10:45.664 [job0] 00:10:45.664 filename=/dev/nvme0n1 00:10:45.664 [job1] 00:10:45.664 filename=/dev/nvme0n2 00:10:45.664 [job2] 00:10:45.664 filename=/dev/nvme0n3 00:10:45.664 [job3] 00:10:45.664 filename=/dev/nvme0n4 00:10:45.664 Could not set queue depth (nvme0n1) 00:10:45.664 Could not set queue depth (nvme0n2) 00:10:45.664 Could not set queue depth (nvme0n3) 00:10:45.664 Could not set queue depth (nvme0n4) 00:10:45.925 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.925 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.925 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.925 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.925 fio-3.35 00:10:45.925 Starting 4 threads 00:10:47.355 00:10:47.355 job0: (groupid=0, jobs=1): err= 0: pid=2964443: Wed Jul 24 21:35:55 2024 00:10:47.355 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:10:47.355 slat (nsec): min=1063, max=22350k, avg=79349.02, stdev=716716.44 00:10:47.355 clat (usec): min=2707, max=51224, avg=13278.72, stdev=6937.51 00:10:47.355 lat (usec): min=2709, max=51239, avg=13358.07, stdev=6971.48 00:10:47.355 clat percentiles (usec): 00:10:47.355 | 1.00th=[ 3982], 5.00th=[ 5997], 10.00th=[ 6980], 20.00th=[ 8586], 00:10:47.355 | 30.00th=[ 9110], 40.00th=[10028], 50.00th=[11600], 60.00th=[12649], 00:10:47.355 | 70.00th=[14353], 80.00th=[17433], 90.00th=[21365], 95.00th=[28705], 00:10:47.355 | 99.00th=[39584], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:47.355 | 99.99th=[51119] 00:10:47.355 write: IOPS=5009, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1008msec); 0 zone resets 00:10:47.355 slat (nsec): min=1881, max=19396k, avg=95923.24, stdev=755956.39 00:10:47.355 clat (usec): min=1162, max=49135, avg=13216.59, stdev=6362.57 00:10:47.355 lat (usec): min=1172, max=49159, avg=13312.52, stdev=6415.32 00:10:47.355 clat percentiles (usec): 00:10:47.355 | 1.00th=[ 3097], 5.00th=[ 5407], 10.00th=[ 6652], 20.00th=[ 8029], 00:10:47.355 | 30.00th=[ 9372], 40.00th=[10552], 50.00th=[11469], 60.00th=[13304], 00:10:47.355 | 70.00th=[15533], 80.00th=[17433], 90.00th=[22414], 95.00th=[25560], 00:10:47.355 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[41157], 00:10:47.355 | 99.99th=[49021] 00:10:47.355 bw ( KiB/s): min=18896, max=20439, per=28.33%, avg=19667.50, stdev=1091.07, samples=2 00:10:47.355 iops : min= 4724, max= 5109, avg=4916.50, stdev=272.24, samples=2 00:10:47.355 lat (msec) : 2=0.05%, 4=2.00%, 10=34.21%, 20=50.52%, 50=13.21% 00:10:47.355 lat (msec) : 100=0.01% 00:10:47.355 cpu : usr=2.68%, sys=4.27%, ctx=536, majf=0, minf=1 00:10:47.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:47.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.355 issued rwts: total=4608,5050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.355 job1: (groupid=0, jobs=1): err= 0: pid=2964456: Wed Jul 24 21:35:55 2024 00:10:47.355 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:10:47.355 slat (nsec): min=1049, max=18685k, avg=104243.35, stdev=726789.73 00:10:47.355 clat (usec): min=5249, max=59731, avg=13275.75, stdev=7320.78 00:10:47.355 lat (usec): min=5256, max=59760, avg=13380.00, stdev=7378.10 00:10:47.355 clat percentiles (usec): 00:10:47.355 | 1.00th=[ 6194], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9634], 00:10:47.355 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11863], 00:10:47.355 | 70.00th=[12518], 80.00th=[13960], 90.00th=[17433], 95.00th=[33817], 00:10:47.355 | 99.00th=[44303], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:10:47.355 | 99.99th=[59507] 00:10:47.355 write: IOPS=4779, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1005msec); 0 zone resets 00:10:47.355 slat (nsec): min=1893, max=11390k, 
avg=104540.67, stdev=565581.34 00:10:47.355 clat (usec): min=3580, max=43973, avg=13688.49, stdev=5526.35 00:10:47.355 lat (usec): min=3584, max=43978, avg=13793.03, stdev=5551.25 00:10:47.355 clat percentiles (usec): 00:10:47.355 | 1.00th=[ 5669], 5.00th=[ 7701], 10.00th=[ 8848], 20.00th=[10028], 00:10:47.355 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11863], 60.00th=[12780], 00:10:47.355 | 70.00th=[14484], 80.00th=[17433], 90.00th=[21365], 95.00th=[25560], 00:10:47.355 | 99.00th=[33424], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:10:47.355 | 99.99th=[43779] 00:10:47.355 bw ( KiB/s): min=16928, max=20439, per=26.92%, avg=18683.50, stdev=2482.65, samples=2 00:10:47.355 iops : min= 4232, max= 5109, avg=4670.50, stdev=620.13, samples=2 00:10:47.355 lat (msec) : 4=0.11%, 10=22.27%, 20=67.58%, 50=9.65%, 100=0.39% 00:10:47.356 cpu : usr=2.39%, sys=3.19%, ctx=617, majf=0, minf=1 00:10:47.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:47.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.356 issued rwts: total=4608,4803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.356 job2: (groupid=0, jobs=1): err= 0: pid=2964475: Wed Jul 24 21:35:55 2024 00:10:47.356 read: IOPS=3809, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1005msec) 00:10:47.356 slat (nsec): min=1072, max=27236k, avg=130098.72, stdev=999057.30 00:10:47.356 clat (usec): min=894, max=68531, avg=15206.03, stdev=7434.23 00:10:47.356 lat (usec): min=4581, max=68536, avg=15336.13, stdev=7536.49 00:10:47.356 clat percentiles (usec): 00:10:47.356 | 1.00th=[ 5014], 5.00th=[ 8225], 10.00th=[ 9634], 20.00th=[11600], 00:10:47.356 | 30.00th=[12387], 40.00th=[13173], 50.00th=[13566], 60.00th=[13960], 00:10:47.356 | 70.00th=[14353], 80.00th=[16581], 90.00th=[20055], 95.00th=[36963], 00:10:47.356 | 99.00th=[47449], 99.50th=[47449], 99.90th=[56361], 99.95th=[61080], 00:10:47.356 | 99.99th=[68682] 00:10:47.356 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:10:47.356 slat (nsec): min=1914, max=22773k, avg=119961.16, stdev=819975.01 00:10:47.356 clat (usec): min=3746, max=61509, avg=16702.41, stdev=8793.30 00:10:47.356 lat (usec): min=3750, max=61513, avg=16822.37, stdev=8824.48 00:10:47.356 clat percentiles (usec): 00:10:47.356 | 1.00th=[ 6783], 5.00th=[ 7635], 10.00th=[ 9765], 20.00th=[10945], 00:10:47.356 | 30.00th=[11731], 40.00th=[12387], 50.00th=[13698], 60.00th=[15401], 00:10:47.356 | 70.00th=[18220], 80.00th=[20317], 90.00th=[28705], 95.00th=[37487], 00:10:47.356 | 99.00th=[50070], 99.50th=[51119], 99.90th=[61604], 99.95th=[61604], 00:10:47.356 | 99.99th=[61604] 00:10:47.356 bw ( KiB/s): min=16319, max=16416, per=23.58%, avg=16367.50, stdev=68.59, samples=2 00:10:47.356 iops : min= 4079, max= 4104, avg=4091.50, stdev=17.68, samples=2 00:10:47.356 lat (usec) : 1000=0.01% 00:10:47.356 lat (msec) : 4=0.05%, 10=11.41%, 20=73.09%, 50=14.85%, 100=0.59% 00:10:47.356 cpu : usr=2.59%, sys=2.19%, ctx=426, majf=0, minf=1 00:10:47.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:47.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.356 issued rwts: total=3829,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.356 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:10:47.356 job3: (groupid=0, jobs=1): err= 0: pid=2964480: Wed Jul 24 21:35:55 2024 00:10:47.356 read: IOPS=3506, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1022msec) 00:10:47.356 slat (nsec): min=1583, max=14593k, avg=114503.83, stdev=799275.02 00:10:47.356 clat (usec): min=822, max=34722, avg=16214.22, stdev=5207.84 00:10:47.356 lat (usec): min=828, max=34760, avg=16328.72, stdev=5248.71 00:10:47.356 clat percentiles (usec): 00:10:47.356 | 1.00th=[ 2704], 5.00th=[ 8586], 10.00th=[10421], 20.00th=[11731], 00:10:47.356 | 30.00th=[13435], 40.00th=[14877], 50.00th=[15795], 60.00th=[17695], 00:10:47.356 | 70.00th=[19268], 80.00th=[20317], 90.00th=[22676], 95.00th=[24511], 00:10:47.356 | 99.00th=[29492], 99.50th=[29754], 99.90th=[31065], 99.95th=[31851], 00:10:47.356 | 99.99th=[34866] 00:10:47.356 write: IOPS=3704, BW=14.5MiB/s (15.2MB/s)(14.8MiB/1022msec); 0 zone resets 00:10:47.356 slat (usec): min=2, max=40491, avg=142.97, stdev=952.41 00:10:47.356 clat (usec): min=3017, max=60655, avg=18883.75, stdev=9181.42 00:10:47.356 lat (usec): min=6248, max=60665, avg=19026.71, stdev=9198.79 00:10:47.356 clat percentiles (usec): 00:10:47.356 | 1.00th=[ 7046], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[12256], 00:10:47.356 | 30.00th=[13829], 40.00th=[16319], 50.00th=[18220], 60.00th=[19530], 00:10:47.356 | 70.00th=[20579], 80.00th=[22938], 90.00th=[27919], 95.00th=[30278], 00:10:47.356 | 99.00th=[60556], 99.50th=[60556], 99.90th=[60556], 99.95th=[60556], 00:10:47.356 | 99.99th=[60556] 00:10:47.356 bw ( KiB/s): min=12566, max=16680, per=21.07%, avg=14623.00, stdev=2909.04, samples=2 00:10:47.356 iops : min= 3141, max= 4170, avg=3655.50, stdev=727.61, samples=2 00:10:47.356 lat (usec) : 1000=0.04% 00:10:47.356 lat (msec) : 2=0.07%, 4=1.30%, 10=8.74%, 20=60.64%, 50=27.64% 00:10:47.356 lat (msec) : 100=1.57% 00:10:47.356 cpu : usr=2.74%, sys=4.51%, ctx=534, majf=0, minf=1 00:10:47.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:47.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.356 issued rwts: total=3584,3786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.356 00:10:47.356 Run status group 0 (all jobs): 00:10:47.356 READ: bw=63.6MiB/s (66.6MB/s), 13.7MiB/s-17.9MiB/s (14.4MB/s-18.8MB/s), io=65.0MiB (68.1MB), run=1005-1022msec 00:10:47.356 WRITE: bw=67.8MiB/s (71.1MB/s), 14.5MiB/s-19.6MiB/s (15.2MB/s-20.5MB/s), io=69.3MiB (72.6MB), run=1005-1022msec 00:10:47.356 00:10:47.356 Disk stats (read/write): 00:10:47.356 nvme0n1: ios=4027/4096, merge=0/0, ticks=49211/44562, in_queue=93773, util=86.87% 00:10:47.356 nvme0n2: ios=4118/4446, merge=0/0, ticks=23493/25280, in_queue=48773, util=89.14% 00:10:47.356 nvme0n3: ios=3125/3562, merge=0/0, ticks=19780/25008, in_queue=44788, util=94.59% 00:10:47.356 nvme0n4: ios=2871/3072, merge=0/0, ticks=47827/57743, in_queue=105570, util=95.39% 00:10:47.356 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:47.356 [global] 00:10:47.356 thread=1 00:10:47.356 invalidate=1 00:10:47.356 rw=randwrite 00:10:47.356 time_based=1 00:10:47.356 runtime=1 00:10:47.356 ioengine=libaio 00:10:47.356 direct=1 00:10:47.356 bs=4096 00:10:47.356 iodepth=128 00:10:47.356 norandommap=0 00:10:47.356 numjobs=1 
00:10:47.356 00:10:47.356 verify_dump=1 00:10:47.356 verify_backlog=512 00:10:47.356 verify_state_save=0 00:10:47.356 do_verify=1 00:10:47.356 verify=crc32c-intel 00:10:47.356 [job0] 00:10:47.356 filename=/dev/nvme0n1 00:10:47.356 [job1] 00:10:47.356 filename=/dev/nvme0n2 00:10:47.356 [job2] 00:10:47.356 filename=/dev/nvme0n3 00:10:47.356 [job3] 00:10:47.356 filename=/dev/nvme0n4 00:10:47.356 Could not set queue depth (nvme0n1) 00:10:47.356 Could not set queue depth (nvme0n2) 00:10:47.356 Could not set queue depth (nvme0n3) 00:10:47.356 Could not set queue depth (nvme0n4) 00:10:47.615 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.615 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.615 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.615 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.615 fio-3.35 00:10:47.615 Starting 4 threads 00:10:48.993 00:10:48.993 job0: (groupid=0, jobs=1): err= 0: pid=2964874: Wed Jul 24 21:35:56 2024 00:10:48.993 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:10:48.993 slat (nsec): min=1050, max=21591k, avg=102327.59, stdev=744019.44 00:10:48.993 clat (usec): min=2590, max=61356, avg=13601.69, stdev=8180.74 00:10:48.993 lat (usec): min=2595, max=61359, avg=13704.01, stdev=8211.22 00:10:48.993 clat percentiles (usec): 00:10:48.993 | 1.00th=[ 4228], 5.00th=[ 6783], 10.00th=[ 7373], 20.00th=[ 9110], 00:10:48.993 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[11076], 60.00th=[11863], 00:10:48.993 | 70.00th=[14091], 80.00th=[16909], 90.00th=[21365], 95.00th=[26870], 00:10:48.993 | 99.00th=[48497], 99.50th=[61080], 99.90th=[61604], 99.95th=[61604], 00:10:48.993 | 99.99th=[61604] 00:10:48.993 write: IOPS=4632, BW=18.1MiB/s (19.0MB/s)(18.3MiB/1010msec); 0 zone resets 00:10:48.993 slat (nsec): min=1865, max=9480.7k, avg=105343.38, stdev=636705.43 00:10:48.993 clat (usec): min=1170, max=52007, avg=13999.18, stdev=7123.05 00:10:48.993 lat (usec): min=1180, max=52013, avg=14104.52, stdev=7153.65 00:10:48.993 clat percentiles (usec): 00:10:48.993 | 1.00th=[ 3916], 5.00th=[ 5669], 10.00th=[ 6718], 20.00th=[ 9241], 00:10:48.993 | 30.00th=[10290], 40.00th=[11600], 50.00th=[13304], 60.00th=[14746], 00:10:48.993 | 70.00th=[16319], 80.00th=[17695], 90.00th=[20317], 95.00th=[23725], 00:10:48.993 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51643], 99.95th=[52167], 00:10:48.993 | 99.99th=[52167] 00:10:48.993 bw ( KiB/s): min=12968, max=23896, per=26.96%, avg=18432.00, stdev=7727.26, samples=2 00:10:48.993 iops : min= 3242, max= 5974, avg=4608.00, stdev=1931.82, samples=2 00:10:48.993 lat (msec) : 2=0.02%, 4=0.93%, 10=27.03%, 20=59.90%, 50=11.11% 00:10:48.994 lat (msec) : 100=1.01% 00:10:48.994 cpu : usr=2.08%, sys=3.37%, ctx=578, majf=0, minf=1 00:10:48.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:48.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.994 issued rwts: total=4608,4679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.994 job1: (groupid=0, jobs=1): err= 0: pid=2964875: Wed Jul 24 21:35:56 2024 00:10:48.994 read: IOPS=5099, BW=19.9MiB/s 
(20.9MB/s)(20.0MiB/1004msec) 00:10:48.994 slat (nsec): min=1002, max=12101k, avg=84624.49, stdev=629584.09 00:10:48.994 clat (usec): min=4806, max=32991, avg=13512.92, stdev=4074.14 00:10:48.994 lat (usec): min=4813, max=33008, avg=13597.54, stdev=4094.23 00:10:48.994 clat percentiles (usec): 00:10:48.994 | 1.00th=[ 7439], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[10028], 00:10:48.994 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12387], 60.00th=[13566], 00:10:48.994 | 70.00th=[15139], 80.00th=[17433], 90.00th=[19268], 95.00th=[21103], 00:10:48.994 | 99.00th=[24773], 99.50th=[26608], 99.90th=[27395], 99.95th=[27657], 00:10:48.994 | 99.99th=[32900] 00:10:48.994 write: IOPS=5178, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1004msec); 0 zone resets 00:10:48.994 slat (nsec): min=1746, max=7101.1k, avg=77335.77, stdev=479821.60 00:10:48.994 clat (usec): min=1398, max=36446, avg=11215.96, stdev=4755.98 00:10:48.994 lat (usec): min=1439, max=36452, avg=11293.30, stdev=4760.98 00:10:48.994 clat percentiles (usec): 00:10:48.994 | 1.00th=[ 3228], 5.00th=[ 4752], 10.00th=[ 5735], 20.00th=[ 7111], 00:10:48.994 | 30.00th=[ 8586], 40.00th=[ 9896], 50.00th=[10814], 60.00th=[11863], 00:10:48.994 | 70.00th=[13042], 80.00th=[14353], 90.00th=[17171], 95.00th=[20055], 00:10:48.994 | 99.00th=[27132], 99.50th=[27919], 99.90th=[35914], 99.95th=[36439], 00:10:48.994 | 99.99th=[36439] 00:10:48.994 bw ( KiB/s): min=20440, max=20520, per=29.96%, avg=20480.00, stdev=56.57, samples=2 00:10:48.994 iops : min= 5110, max= 5130, avg=5120.00, stdev=14.14, samples=2 00:10:48.994 lat (msec) : 2=0.07%, 4=1.82%, 10=29.05%, 20=62.37%, 50=6.69% 00:10:48.994 cpu : usr=1.79%, sys=4.79%, ctx=651, majf=0, minf=1 00:10:48.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:48.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.994 issued rwts: total=5120,5199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.994 job2: (groupid=0, jobs=1): err= 0: pid=2964876: Wed Jul 24 21:35:56 2024 00:10:48.994 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:10:48.994 slat (nsec): min=1042, max=11205k, avg=113766.57, stdev=766944.17 00:10:48.994 clat (usec): min=2587, max=38227, avg=15746.41, stdev=7003.63 00:10:48.994 lat (usec): min=2599, max=38231, avg=15860.18, stdev=7029.58 00:10:48.994 clat percentiles (usec): 00:10:48.994 | 1.00th=[ 6390], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10814], 00:10:48.994 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13435], 60.00th=[15139], 00:10:48.994 | 70.00th=[16450], 80.00th=[19006], 90.00th=[28181], 95.00th=[31851], 00:10:48.994 | 99.00th=[37487], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:10:48.994 | 99.99th=[38011] 00:10:48.994 write: IOPS=4356, BW=17.0MiB/s (17.8MB/s)(17.2MiB/1009msec); 0 zone resets 00:10:48.994 slat (nsec): min=1765, max=10055k, avg=109860.82, stdev=672787.48 00:10:48.994 clat (usec): min=1133, max=36804, avg=14458.96, stdev=5847.08 00:10:48.994 lat (usec): min=1143, max=36807, avg=14568.82, stdev=5879.97 00:10:48.994 clat percentiles (usec): 00:10:48.994 | 1.00th=[ 3294], 5.00th=[ 6521], 10.00th=[ 8029], 20.00th=[ 9241], 00:10:48.994 | 30.00th=[10683], 40.00th=[12125], 50.00th=[13698], 60.00th=[15533], 00:10:48.994 | 70.00th=[16581], 80.00th=[19006], 90.00th=[22152], 95.00th=[25297], 00:10:48.994 | 99.00th=[29754], 99.50th=[30802], 99.90th=[35914], 
99.95th=[36963], 00:10:48.994 | 99.99th=[36963] 00:10:48.994 bw ( KiB/s): min=13664, max=20480, per=24.97%, avg=17072.00, stdev=4819.64, samples=2 00:10:48.994 iops : min= 3416, max= 5120, avg=4268.00, stdev=1204.91, samples=2 00:10:48.994 lat (msec) : 2=0.12%, 4=0.71%, 10=18.59%, 20=63.01%, 50=17.57% 00:10:48.994 cpu : usr=2.38%, sys=2.98%, ctx=505, majf=0, minf=1 00:10:48.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:48.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.994 issued rwts: total=4096,4396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.994 job3: (groupid=0, jobs=1): err= 0: pid=2964878: Wed Jul 24 21:35:56 2024 00:10:48.994 read: IOPS=2773, BW=10.8MiB/s (11.4MB/s)(11.0MiB/1015msec) 00:10:48.994 slat (nsec): min=1053, max=20201k, avg=175642.76, stdev=995348.82 00:10:48.994 clat (usec): min=1991, max=55419, avg=22449.29, stdev=7457.30 00:10:48.994 lat (usec): min=5609, max=60091, avg=22624.93, stdev=7492.91 00:10:48.994 clat percentiles (usec): 00:10:48.994 | 1.00th=[ 8291], 5.00th=[14222], 10.00th=[15795], 20.00th=[17171], 00:10:48.994 | 30.00th=[18220], 40.00th=[19006], 50.00th=[20579], 60.00th=[22152], 00:10:48.994 | 70.00th=[23462], 80.00th=[27657], 90.00th=[31851], 95.00th=[37487], 00:10:48.994 | 99.00th=[45351], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:10:48.994 | 99.99th=[55313] 00:10:48.994 write: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec); 0 zone resets 00:10:48.994 slat (nsec): min=1915, max=15177k, avg=163166.61, stdev=892216.73 00:10:48.994 clat (usec): min=1333, max=51828, avg=21162.76, stdev=9489.57 00:10:48.994 lat (usec): min=1346, max=51835, avg=21325.92, stdev=9545.12 00:10:48.994 clat percentiles (usec): 00:10:48.994 | 1.00th=[ 6587], 5.00th=[10683], 10.00th=[11731], 20.00th=[13304], 00:10:48.994 | 30.00th=[14746], 40.00th=[16057], 50.00th=[18744], 60.00th=[21103], 00:10:48.994 | 70.00th=[24511], 80.00th=[28705], 90.00th=[36439], 95.00th=[40633], 00:10:48.994 | 99.00th=[47449], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:10:48.994 | 99.99th=[51643] 00:10:48.994 bw ( KiB/s): min=11360, max=13216, per=17.98%, avg=12288.00, stdev=1312.39, samples=2 00:10:48.994 iops : min= 2840, max= 3304, avg=3072.00, stdev=328.10, samples=2 00:10:48.994 lat (msec) : 2=0.10%, 4=0.10%, 10=2.63%, 20=48.38%, 50=48.12% 00:10:48.994 lat (msec) : 100=0.66% 00:10:48.994 cpu : usr=1.78%, sys=1.97%, ctx=434, majf=0, minf=1 00:10:48.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:48.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.994 issued rwts: total=2815,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.994 00:10:48.994 Run status group 0 (all jobs): 00:10:48.994 READ: bw=64.0MiB/s (67.1MB/s), 10.8MiB/s-19.9MiB/s (11.4MB/s-20.9MB/s), io=65.0MiB (68.2MB), run=1004-1015msec 00:10:48.994 WRITE: bw=66.8MiB/s (70.0MB/s), 11.8MiB/s-20.2MiB/s (12.4MB/s-21.2MB/s), io=67.8MiB (71.0MB), run=1004-1015msec 00:10:48.994 00:10:48.994 Disk stats (read/write): 00:10:48.994 nvme0n1: ios=3859/4096, merge=0/0, ticks=40513/38541, in_queue=79054, util=95.69% 00:10:48.994 nvme0n2: ios=4248/4608, merge=0/0, ticks=51916/46267, 
in_queue=98183, util=90.05% 00:10:48.994 nvme0n3: ios=3641/3712, merge=0/0, ticks=39471/33770, in_queue=73241, util=92.41% 00:10:48.994 nvme0n4: ios=2488/2560, merge=0/0, ticks=18874/17765, in_queue=36639, util=98.74% 00:10:48.994 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:48.994 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2965072 00:10:48.994 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:48.994 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:48.994 [global] 00:10:48.994 thread=1 00:10:48.994 invalidate=1 00:10:48.994 rw=read 00:10:48.994 time_based=1 00:10:48.994 runtime=10 00:10:48.994 ioengine=libaio 00:10:48.994 direct=1 00:10:48.994 bs=4096 00:10:48.994 iodepth=1 00:10:48.994 norandommap=1 00:10:48.994 numjobs=1 00:10:48.994 00:10:48.994 [job0] 00:10:48.994 filename=/dev/nvme0n1 00:10:48.994 [job1] 00:10:48.994 filename=/dev/nvme0n2 00:10:48.994 [job2] 00:10:48.994 filename=/dev/nvme0n3 00:10:48.994 [job3] 00:10:48.994 filename=/dev/nvme0n4 00:10:48.994 Could not set queue depth (nvme0n1) 00:10:48.994 Could not set queue depth (nvme0n2) 00:10:48.994 Could not set queue depth (nvme0n3) 00:10:48.994 Could not set queue depth (nvme0n4) 00:10:48.994 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.994 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.994 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.994 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.994 fio-3.35 00:10:48.994 Starting 4 threads 00:10:52.278 21:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:52.278 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=266240, buflen=4096 00:10:52.278 fio: pid=2965250, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:52.278 21:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:52.278 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=22220800, buflen=4096 00:10:52.278 fio: pid=2965249, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:52.278 21:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.278 21:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:52.278 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=716800, buflen=4096 00:10:52.278 fio: pid=2965247, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:52.278 21:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.279 21:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc1 00:10:52.537 21:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.537 21:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:52.537 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=323584, buflen=4096 00:10:52.537 fio: pid=2965248, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:52.537 00:10:52.537 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2965247: Wed Jul 24 21:36:00 2024 00:10:52.537 read: IOPS=57, BW=228KiB/s (233kB/s)(700KiB/3072msec) 00:10:52.537 slat (usec): min=5, max=13497, avg=127.51, stdev=1127.87 00:10:52.537 clat (usec): min=358, max=43227, avg=17414.28, stdev=20438.10 00:10:52.537 lat (usec): min=366, max=56725, avg=17504.94, stdev=20568.52 00:10:52.537 clat percentiles (usec): 00:10:52.537 | 1.00th=[ 363], 5.00th=[ 371], 10.00th=[ 392], 20.00th=[ 449], 00:10:52.537 | 30.00th=[ 474], 40.00th=[ 515], 50.00th=[ 529], 60.00th=[40633], 00:10:52.537 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:52.537 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:52.537 | 99.99th=[43254] 00:10:52.537 bw ( KiB/s): min= 96, max= 96, per=1.38%, avg=96.00, stdev= 0.00, samples=5 00:10:52.537 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:10:52.537 lat (usec) : 500=32.39%, 750=25.00% 00:10:52.537 lat (msec) : 2=1.14%, 20=0.57%, 50=40.34% 00:10:52.537 cpu : usr=0.00%, sys=0.16%, ctx=179, majf=0, minf=1 00:10:52.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.537 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.537 issued rwts: total=176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.537 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2965248: Wed Jul 24 21:36:00 2024 00:10:52.537 read: IOPS=24, BW=95.8KiB/s (98.1kB/s)(316KiB/3297msec) 00:10:52.537 slat (usec): min=14, max=14585, avg=224.02, stdev=1634.24 00:10:52.537 clat (usec): min=1022, max=42980, avg=41488.96, stdev=4617.44 00:10:52.537 lat (usec): min=1058, max=56082, avg=41715.51, stdev=4900.24 00:10:52.537 clat percentiles (usec): 00:10:52.537 | 1.00th=[ 1020], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:10:52.537 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:52.537 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:52.537 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:52.537 | 99.99th=[42730] 00:10:52.537 bw ( KiB/s): min= 89, max= 96, per=1.35%, avg=94.83, stdev= 2.86, samples=6 00:10:52.537 iops : min= 22, max= 24, avg=23.67, stdev= 0.82, samples=6 00:10:52.537 lat (msec) : 2=1.25%, 50=97.50% 00:10:52.537 cpu : usr=0.12%, sys=0.00%, ctx=84, majf=0, minf=1 00:10:52.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.537 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.537 issued rwts: total=80,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:52.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.537 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2965249: Wed Jul 24 21:36:00 2024 00:10:52.537 read: IOPS=1899, BW=7598KiB/s (7780kB/s)(21.2MiB/2856msec) 00:10:52.537 slat (usec): min=2, max=11483, avg=12.55, stdev=219.25 00:10:52.537 clat (usec): min=359, max=2045, avg=511.82, stdev=63.59 00:10:52.537 lat (usec): min=367, max=12054, avg=524.37, stdev=229.14 00:10:52.537 clat percentiles (usec): 00:10:52.537 | 1.00th=[ 449], 5.00th=[ 465], 10.00th=[ 474], 20.00th=[ 486], 00:10:52.537 | 30.00th=[ 490], 40.00th=[ 494], 50.00th=[ 498], 60.00th=[ 506], 00:10:52.537 | 70.00th=[ 510], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 611], 00:10:52.537 | 99.00th=[ 717], 99.50th=[ 734], 99.90th=[ 1139], 99.95th=[ 1778], 00:10:52.537 | 99.99th=[ 2040] 00:10:52.537 bw ( KiB/s): min= 7360, max= 7936, per=100.00%, avg=7635.20, stdev=253.40, samples=5 00:10:52.537 iops : min= 1840, max= 1984, avg=1908.80, stdev=63.35, samples=5 00:10:52.537 lat (usec) : 500=51.71%, 750=47.88%, 1000=0.26% 00:10:52.537 lat (msec) : 2=0.09%, 4=0.04% 00:10:52.537 cpu : usr=1.23%, sys=3.01%, ctx=5430, majf=0, minf=1 00:10:52.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.537 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.537 issued rwts: total=5426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.537 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2965250: Wed Jul 24 21:36:00 2024 00:10:52.537 read: IOPS=24, BW=96.3KiB/s (98.6kB/s)(260KiB/2699msec) 00:10:52.537 slat (nsec): min=12712, max=29867, avg=24504.89, stdev=2180.30 00:10:52.537 clat (usec): min=932, max=44113, avg=41454.19, stdev=5120.11 00:10:52.537 lat (usec): min=962, max=44142, avg=41478.69, stdev=5119.44 00:10:52.537 clat percentiles (usec): 00:10:52.538 | 1.00th=[ 930], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:10:52.538 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:52.538 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:10:52.538 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:10:52.538 | 99.99th=[44303] 00:10:52.538 bw ( KiB/s): min= 96, max= 96, per=1.38%, avg=96.00, stdev= 0.00, samples=5 00:10:52.538 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:10:52.538 lat (usec) : 1000=1.52% 00:10:52.538 lat (msec) : 50=96.97% 00:10:52.538 cpu : usr=0.15%, sys=0.00%, ctx=68, majf=0, minf=2 00:10:52.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.538 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.538 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.538 00:10:52.538 Run status group 0 (all jobs): 00:10:52.538 READ: bw=6969KiB/s (7136kB/s), 95.8KiB/s-7598KiB/s (98.1kB/s-7780kB/s), io=22.4MiB (23.5MB), run=2699-3297msec 00:10:52.538 00:10:52.538 Disk stats (read/write): 00:10:52.538 nvme0n1: ios=168/0, merge=0/0, ticks=3590/0, in_queue=3590, util=99.33% 00:10:52.538 nvme0n2: ios=73/0, merge=0/0, 
ticks=3028/0, in_queue=3028, util=95.20% 00:10:52.538 nvme0n3: ios=5383/0, merge=0/0, ticks=2708/0, in_queue=2708, util=95.65% 00:10:52.538 nvme0n4: ios=110/0, merge=0/0, ticks=3307/0, in_queue=3307, util=99.78% 00:10:52.796 21:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.796 21:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:53.054 21:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.054 21:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:53.054 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.054 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:53.311 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.311 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2965072 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1217 -- # local i=0 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # return 0 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:53.570 nvmf hotplug test: fio failed as expected 00:10:53.570 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:53.829 rmmod nvme_tcp 00:10:53.829 rmmod nvme_fabrics 00:10:53.829 rmmod nvme_keyring 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2962178 ']' 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2962178 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2962178 ']' 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2962178 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:53.829 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2962178 00:10:54.125 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:54.125 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:54.125 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2962178' 00:10:54.125 killing process with pid 2962178 00:10:54.125 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2962178 00:10:54.125 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2962178 00:10:54.125 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:54.125 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:54.125 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:54.125 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:54.125 21:36:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:54.125 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.125 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.125 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:56.690 00:10:56.690 real 0m26.289s 00:10:56.690 user 1m46.590s 00:10:56.690 sys 0m7.425s 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.690 ************************************ 00:10:56.690 END TEST nvmf_fio_target 00:10:56.690 ************************************ 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:56.690 ************************************ 00:10:56.690 START TEST nvmf_bdevio 00:10:56.690 ************************************ 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:56.690 * Looking for test storage... 
00:10:56.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.690 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:56.691 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:01.963 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:01.963 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:01.963 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:01.964 Found net devices under 0000:86:00.0: cvl_0_0 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:01.964 Found net devices under 0000:86:00.1: cvl_0_1 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.964 21:36:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.964 21:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:01.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:11:01.964 00:11:01.964 --- 10.0.0.2 ping statistics --- 00:11:01.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.964 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:01.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:11:01.964 00:11:01.964 --- 10.0.0.1 ping statistics --- 00:11:01.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.964 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2970008 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2970008 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2970008 ']' 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.964 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.221 [2024-07-24 21:36:10.087469] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:11:02.221 [2024-07-24 21:36:10.087513] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.221 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.221 [2024-07-24 21:36:10.147366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.221 [2024-07-24 21:36:10.227773] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.221 [2024-07-24 21:36:10.227809] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.221 [2024-07-24 21:36:10.227816] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.221 [2024-07-24 21:36:10.227822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.221 [2024-07-24 21:36:10.227827] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.221 [2024-07-24 21:36:10.227936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:02.221 [2024-07-24 21:36:10.228037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:02.221 [2024-07-24 21:36:10.228146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.221 [2024-07-24 21:36:10.228147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:03.150 [2024-07-24 21:36:10.951446] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:03.150 Malloc0 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.150 21:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:03.150 [2024-07-24 21:36:11.002646] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.150 21:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.150 21:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:03.150 21:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:03.150 21:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:03.150 21:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:03.150 21:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:03.150 21:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:03.150 { 00:11:03.150 "params": { 00:11:03.150 "name": "Nvme$subsystem", 00:11:03.150 "trtype": "$TEST_TRANSPORT", 00:11:03.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:03.150 "adrfam": "ipv4", 00:11:03.150 "trsvcid": "$NVMF_PORT", 00:11:03.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:03.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:03.150 "hdgst": ${hdgst:-false}, 00:11:03.150 "ddgst": ${ddgst:-false} 00:11:03.150 }, 00:11:03.150 "method": "bdev_nvme_attach_controller" 00:11:03.150 } 00:11:03.150 EOF 00:11:03.150 )") 00:11:03.150 21:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:03.150 21:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
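With the target up, bdevio.sh provisions it over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420. rpc_cmd is the test-harness wrapper; assuming it fronts scripts/rpc.py against the default /var/tmp/spdk.sock, the equivalent standalone sequence is roughly:

    # create the transport, a backing bdev, a subsystem with that bdev, and a TCP listener
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420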
00:11:03.150 21:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:03.150 21:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:03.150 "params": { 00:11:03.150 "name": "Nvme1", 00:11:03.150 "trtype": "tcp", 00:11:03.150 "traddr": "10.0.0.2", 00:11:03.150 "adrfam": "ipv4", 00:11:03.150 "trsvcid": "4420", 00:11:03.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:03.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:03.150 "hdgst": false, 00:11:03.150 "ddgst": false 00:11:03.150 }, 00:11:03.150 "method": "bdev_nvme_attach_controller" 00:11:03.150 }' 00:11:03.150 [2024-07-24 21:36:11.051662] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:11:03.150 [2024-07-24 21:36:11.051708] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2970269 ] 00:11:03.150 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.150 [2024-07-24 21:36:11.106443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.150 [2024-07-24 21:36:11.181510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.150 [2024-07-24 21:36:11.181606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.150 [2024-07-24 21:36:11.181608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.406 I/O targets: 00:11:03.406 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:03.406 00:11:03.406 00:11:03.406 CUnit - A unit testing framework for C - Version 2.1-3 00:11:03.406 http://cunit.sourceforge.net/ 00:11:03.406 00:11:03.406 00:11:03.406 Suite: bdevio tests on: Nvme1n1 00:11:03.406 Test: blockdev write read block ...passed 00:11:03.663 Test: blockdev write zeroes read block ...passed 00:11:03.663 Test: blockdev write zeroes read no split ...passed 00:11:03.663 Test: blockdev write zeroes read split ...passed 00:11:03.663 Test: blockdev write zeroes read split partial ...passed 00:11:03.663 Test: blockdev reset ...[2024-07-24 21:36:11.707971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:03.663 [2024-07-24 21:36:11.708037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f066d0 (9): Bad file descriptor 00:11:03.663 [2024-07-24 21:36:11.721363] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:03.663 passed 00:11:03.663 Test: blockdev write read 8 blocks ...passed 00:11:03.663 Test: blockdev write read size > 128k ...passed 00:11:03.663 Test: blockdev write read invalid size ...passed 00:11:03.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.919 Test: blockdev write read max offset ...passed 00:11:03.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.919 Test: blockdev writev readv 8 blocks ...passed 00:11:03.919 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.919 Test: blockdev writev readv block ...passed 00:11:03.919 Test: blockdev writev readv size > 128k ...passed 00:11:03.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.919 Test: blockdev comparev and writev ...[2024-07-24 21:36:11.955254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.919 [2024-07-24 21:36:11.955282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:03.919 [2024-07-24 21:36:11.955296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.919 [2024-07-24 21:36:11.955303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:03.920 [2024-07-24 21:36:11.955859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.920 [2024-07-24 21:36:11.955870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:03.920 [2024-07-24 21:36:11.955882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.920 [2024-07-24 21:36:11.955890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:03.920 [2024-07-24 21:36:11.956388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.920 [2024-07-24 21:36:11.956399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:03.920 [2024-07-24 21:36:11.956410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.920 [2024-07-24 21:36:11.956418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:03.920 [2024-07-24 21:36:11.956993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.920 [2024-07-24 21:36:11.957005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:03.920 [2024-07-24 21:36:11.957016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.920 [2024-07-24 21:36:11.957023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:03.920 passed 00:11:04.177 Test: blockdev nvme passthru rw ...passed 00:11:04.177 Test: blockdev nvme passthru vendor specific ...[2024-07-24 21:36:12.040895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:04.177 [2024-07-24 21:36:12.040909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:04.177 [2024-07-24 21:36:12.041268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:04.177 [2024-07-24 21:36:12.041279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:04.177 [2024-07-24 21:36:12.041635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:04.177 [2024-07-24 21:36:12.041645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:04.177 [2024-07-24 21:36:12.042001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:04.177 [2024-07-24 21:36:12.042011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:04.177 passed 00:11:04.177 Test: blockdev nvme admin passthru ...passed 00:11:04.177 Test: blockdev copy ...passed 00:11:04.177 00:11:04.177 Run Summary: Type Total Ran Passed Failed Inactive 00:11:04.177 suites 1 1 n/a 0 0 00:11:04.177 tests 23 23 23 0 0 00:11:04.177 asserts 152 152 152 0 n/a 00:11:04.177 00:11:04.177 Elapsed time = 1.275 seconds 00:11:04.177 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.177 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.177 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.177 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.177 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:04.177 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:04.177 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:04.177 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:04.177 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:04.177 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:04.177 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:04.177 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:04.177 rmmod nvme_tcp 00:11:04.435 rmmod nvme_fabrics 00:11:04.435 rmmod nvme_keyring 00:11:04.435 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
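All 23 bdevio cases above passed in about 1.3 seconds against the single Nvme1n1 namespace. The bdevio binary takes its initiator configuration through --json (here fed from /dev/fd/62 by gen_nvmf_target_json); the object printed earlier is the per-controller fragment, and a standalone rerun would point --json at a small config file along the following lines (the outer "subsystems"/"config" wrapper is the standard SPDK JSON-config layout and is an assumption here, since only the inner object appears in this trace; the file name is illustrative):

    /tmp/bdevio_nvme.json:
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }

    ./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json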
00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2970008 ']' 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2970008 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 2970008 ']' 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2970008 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2970008 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2970008' 00:11:04.436 killing process with pid 2970008 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2970008 00:11:04.436 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2970008 00:11:04.694 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:04.694 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:04.694 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:04.694 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.694 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:04.694 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.694 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.694 21:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.598 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:06.598 00:11:06.598 real 0m10.360s 00:11:06.598 user 0m13.437s 00:11:06.598 sys 0m4.734s 00:11:06.598 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.598 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.598 ************************************ 00:11:06.598 END TEST nvmf_bdevio 00:11:06.598 ************************************ 00:11:06.598 21:36:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:06.598 00:11:06.598 real 4m33.616s 00:11:06.598 user 10m33.136s 00:11:06.598 sys 1m31.424s 00:11:06.598 21:36:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.598 21:36:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.598 ************************************ 00:11:06.598 END TEST nvmf_target_core 00:11:06.598 ************************************ 00:11:06.857 21:36:14 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:06.857 21:36:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:06.857 21:36:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.857 21:36:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:06.857 ************************************ 00:11:06.857 START TEST nvmf_target_extra 00:11:06.857 ************************************ 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:06.857 * Looking for test storage... 00:11:06.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.857 21:36:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
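Before the extra-target tests run, common.sh (sourced just above) derives a per-host identity with nvme gen-hostnqn (here nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562) and stores it in NVME_HOST, next to NVME_CONNECT='nvme connect' and NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn. The nvme-cli based tests later in this suite (SPDK_TEST_NVME_CLI=1) combine them along these lines; this is a sketch of the pattern with this run's target address, not a command taken from this log:

    # illustrative use of the NVME_HOST identity against the 10.0.0.2:4420 target
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn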
00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:06.858 ************************************ 00:11:06.858 START TEST nvmf_example 00:11:06.858 ************************************ 00:11:06.858 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:07.116 * Looking for test storage... 00:11:07.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.116 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.116 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:07.116 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.116 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.116 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.116 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.116 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.116 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.116 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.116 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.116 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.116 21:36:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.116 21:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.116 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:07.117 21:36:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.393 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.393 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:12.394 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:12.394 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:12.394 Found net devices under 0000:86:00.0: cvl_0_0 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.394 21:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:12.394 Found net devices under 0000:86:00.1: cvl_0_1 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.394 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:12.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:11:12.655 00:11:12.655 --- 10.0.0.2 ping statistics --- 00:11:12.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.655 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:12.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:11:12.655 00:11:12.655 --- 10.0.0.1 ping statistics --- 00:11:12.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.655 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2974063 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2974063 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2974063 ']' 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.655 21:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.655 21:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.655 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.595 21:36:21 
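[editor's note] With the namespace in place, nvmfexamplestart launches build/examples/nvmf inside it and waitforlisten blocks until the app answers on /var/tmp/spdk.sock; the rpc_cmd calls traced above then provision the example subsystem. A simplified stand-in for that sequence, with paths relative to the SPDK repo root; the polling loop is a sketch rather than the harness's actual waitforlisten, and rpc_get_methods is used only as a cheap probe of the RPC socket:

  ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &   # flags copied from the run above
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
      sleep 0.5                                             # wait for the RPC socket to come up
  done
  # provision the target over RPC (same calls as the rpc_cmd trace above)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512                # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420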
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:13.595 21:36:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:13.595 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.849 Initializing NVMe Controllers 00:11:25.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:25.849 Initialization complete. Launching workers. 00:11:25.849 ======================================================== 00:11:25.849 Latency(us) 00:11:25.849 Device Information : IOPS MiB/s Average min max 00:11:25.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13783.04 53.84 4646.34 716.49 16311.91 00:11:25.849 ======================================================== 00:11:25.849 Total : 13783.04 53.84 4646.34 716.49 16311.91 00:11:25.849 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.849 rmmod nvme_tcp 00:11:25.849 rmmod nvme_fabrics 00:11:25.849 rmmod nvme_keyring 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2974063 ']' 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2974063 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2974063 ']' 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2974063 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:25.849 21:36:31 
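[editor's note] The workload above is generated by spdk_nvme_perf; the summary table reports one active TCP namespace at roughly 13.8k IOPS with about 4.6 ms average latency for this QD64 / 4 KiB mixed workload. A hedged reading of the flags used (spdk_nvme_perf --help is the authoritative reference):

  # -q 64      queue depth per worker
  # -o 4096    I/O size in bytes
  # -w randrw  random mixed read/write workload
  # -M 30      read percentage of the mix (30% reads / 70% writes)
  # -t 10      run time in seconds
  # -r '...'   transport ID of the target to attach to
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'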
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2974063 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2974063' 00:11:25.849 killing process with pid 2974063 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 2974063 00:11:25.849 21:36:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 2974063 00:11:25.849 nvmf threads initialize successfully 00:11:25.849 bdev subsystem init successfully 00:11:25.849 created a nvmf target service 00:11:25.849 create targets's poll groups done 00:11:25.849 all subsystems of target started 00:11:25.849 nvmf target is running 00:11:25.849 all subsystems of target stopped 00:11:25.849 destroy targets's poll groups done 00:11:25.849 destroyed the nvmf target service 00:11:25.849 bdev subsystem finish successfully 00:11:25.849 nvmf threads destroy successfully 00:11:25.849 21:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:25.849 21:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:25.849 21:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:25.849 21:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:25.849 21:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:25.849 21:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.849 21:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.849 21:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.109 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:26.109 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:26.109 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:26.109 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.370 00:11:26.370 real 0m19.335s 00:11:26.370 user 0m46.174s 00:11:26.370 sys 0m5.456s 00:11:26.370 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.371 ************************************ 00:11:26.371 END TEST nvmf_example 00:11:26.371 ************************************ 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.371 21:36:34 
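[editor's note] nvmftestfini then tears the setup down: the NVMe/TCP kernel modules pulled in earlier are removed, the example app is killed by PID, the target namespace is deleted and the test addresses are flushed. If a run is interrupted, roughly the same cleanup can be done by hand; namespace and interface names are the ones from this run, and $nvmfpid stands for the PID printed above:

  modprobe -v -r nvme-tcp                 # nvme-fabrics / nvme-keyring drop out once no longer used
  kill -9 "$nvmfpid" 2> /dev/null || true # stop the nvmf example app
  ip netns delete cvl_0_0_ns_spdk         # interfaces fall back to the root namespace
  ip -4 addr flush cvl_0_1                # clear the initiator-side test address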
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.371 ************************************ 00:11:26.371 START TEST nvmf_filesystem 00:11:26.371 ************************************ 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:26.371 * Looking for test storage... 00:11:26.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:26.371 21:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:26.371 21:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:26.371 21:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:26.371 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:26.372 21:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:26.372 #define SPDK_CONFIG_H 00:11:26.372 #define SPDK_CONFIG_APPS 1 00:11:26.372 #define SPDK_CONFIG_ARCH native 00:11:26.372 #undef SPDK_CONFIG_ASAN 00:11:26.372 #undef SPDK_CONFIG_AVAHI 00:11:26.372 #undef SPDK_CONFIG_CET 00:11:26.372 #define SPDK_CONFIG_COVERAGE 1 00:11:26.372 #define SPDK_CONFIG_CROSS_PREFIX 00:11:26.372 #undef SPDK_CONFIG_CRYPTO 00:11:26.372 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:26.372 #undef SPDK_CONFIG_CUSTOMOCF 00:11:26.372 #undef SPDK_CONFIG_DAOS 00:11:26.372 #define SPDK_CONFIG_DAOS_DIR 00:11:26.372 #define SPDK_CONFIG_DEBUG 1 00:11:26.372 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:26.372 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:26.372 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:26.372 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:26.372 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:26.372 #undef SPDK_CONFIG_DPDK_UADK 00:11:26.372 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:26.372 #define SPDK_CONFIG_EXAMPLES 1 00:11:26.372 #undef SPDK_CONFIG_FC 00:11:26.372 #define SPDK_CONFIG_FC_PATH 00:11:26.372 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:26.372 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:26.372 #undef SPDK_CONFIG_FUSE 00:11:26.372 #undef SPDK_CONFIG_FUZZER 00:11:26.372 #define SPDK_CONFIG_FUZZER_LIB 00:11:26.372 #undef SPDK_CONFIG_GOLANG 00:11:26.372 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:26.372 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:26.372 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:26.372 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:26.372 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:26.372 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:26.372 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:26.372 #define SPDK_CONFIG_IDXD 1 00:11:26.372 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:26.372 #undef SPDK_CONFIG_IPSEC_MB 00:11:26.372 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:26.372 #define SPDK_CONFIG_ISAL 1 00:11:26.372 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:26.372 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:26.372 #define SPDK_CONFIG_LIBDIR 00:11:26.372 #undef SPDK_CONFIG_LTO 00:11:26.372 #define SPDK_CONFIG_MAX_LCORES 128 00:11:26.372 #define SPDK_CONFIG_NVME_CUSE 1 00:11:26.372 #undef SPDK_CONFIG_OCF 00:11:26.372 #define SPDK_CONFIG_OCF_PATH 00:11:26.372 #define SPDK_CONFIG_OPENSSL_PATH 00:11:26.372 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:26.372 #define SPDK_CONFIG_PGO_DIR 00:11:26.372 #undef SPDK_CONFIG_PGO_USE 00:11:26.372 #define SPDK_CONFIG_PREFIX /usr/local 00:11:26.372 #undef SPDK_CONFIG_RAID5F 00:11:26.372 #undef SPDK_CONFIG_RBD 00:11:26.372 #define SPDK_CONFIG_RDMA 1 00:11:26.372 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:26.372 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:26.372 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:26.372 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:26.372 #define SPDK_CONFIG_SHARED 1 00:11:26.372 #undef SPDK_CONFIG_SMA 00:11:26.372 #define SPDK_CONFIG_TESTS 1 00:11:26.372 #undef SPDK_CONFIG_TSAN 00:11:26.372 #define SPDK_CONFIG_UBLK 1 00:11:26.372 #define SPDK_CONFIG_UBSAN 1 00:11:26.372 #undef SPDK_CONFIG_UNIT_TESTS 00:11:26.372 #undef SPDK_CONFIG_URING 00:11:26.372 #define SPDK_CONFIG_URING_PATH 00:11:26.372 #undef SPDK_CONFIG_URING_ZNS 00:11:26.372 #undef SPDK_CONFIG_USDT 00:11:26.372 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:26.372 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:26.372 #define SPDK_CONFIG_VFIO_USER 1 00:11:26.372 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:11:26.372 #define SPDK_CONFIG_VHOST 1 00:11:26.372 #define SPDK_CONFIG_VIRTIO 1 00:11:26.372 #undef SPDK_CONFIG_VTUNE 00:11:26.372 #define SPDK_CONFIG_VTUNE_DIR 00:11:26.372 #define SPDK_CONFIG_WERROR 1 00:11:26.372 #define SPDK_CONFIG_WPDK_DIR 00:11:26.372 #undef SPDK_CONFIG_XNVME 00:11:26.372 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:26.372 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:26.373 21:36:34 
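[editor's note] applications.sh, sourced a few lines up, resolves the repo root, defines the app launch arrays (NVMF_APP, SPDK_APP, DD_APP, ...) and then pattern-matches the include/spdk/config.h dump shown above for SPDK_CONFIG_DEBUG, so that debug-only helper behaviour is only enabled on debug builds. A hedged stand-alone equivalent of that check, with the path relative to the SPDK repo root:

  if grep -q '#define SPDK_CONFIG_DEBUG 1' include/spdk/config.h; then
      echo "debug build: SPDK_AUTOTEST_DEBUG_APPS can take effect"
  fi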
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:26.373 21:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:26.373 21:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:26.373 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:26.374 
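[editor's note] The long run of ': N' / 'export SPDK_TEST_*' pairs above is autotest_common.sh giving every test switch a default and exporting it; in the xtrace output the ':' no-op shows the already-expanded value (1 for the features enabled in autorun-spdk.conf, 0 or empty otherwise). The underlying idiom is the usual bash default-if-unset pattern; SPDK_TEST_EXAMPLE_FLAG below is a hypothetical name used only to illustrate it:

  : "${SPDK_TEST_EXAMPLE_FLAG:=0}"   # keep any CI-provided value, otherwise default to 0
  export SPDK_TEST_EXAMPLE_FLAG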
21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:11:26.374 21:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:11:26.374 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2976378 ]] 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2976378 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.pNgLpP 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.pNgLpP/tests/target /tmp/spdk.pNgLpP 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:11:26.375 21:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:11:26.375 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=185183186944 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974283264 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10791096320 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97924960256 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987141632 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=62181376 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39171829760 00:11:26.635 21:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194857472 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=23027712 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97984532480 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987141632 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=2609152 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:11:26.635 * Looking for test storage... 
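The trace around this point works out where the test can scratch roughly 2 GiB of storage: it reads df -T into per-mount-point arrays, then walks the candidate directories until one has enough free space without pushing its filesystem past 95% use, and exports the winner as SPDK_TEST_STORAGE. A minimal standalone sketch of that logic follows; the helper is set_test_storage in common/autotest_common.sh per the trace, while the -B1 flag is an assumption so df reports bytes, which is what the traced array values look like.

#!/usr/bin/env bash
# Condensed sketch of the space check traced here (set_test_storage).
# Variable names follow the trace; the tmpfs/ramfs special cases and
# error handling are dropped. -B1 (bytes) is an assumption.
set -e
testdir=${testdir:-$PWD}
storage_fallback=$(mktemp -udt spdk.XXXXXX)
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2 GiB payload plus margin

declare -A fss sizes avails uses
while read -r source fs size use avail _ mount; do
  fss["$mount"]=$fs
  sizes["$mount"]=$size
  avails["$mount"]=$avail
  uses["$mount"]=$use
done < <(df -T -B1 | grep -v Filesystem)

for target_dir in "${storage_candidates[@]}"; do
  mkdir -p "$target_dir"
  mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
  target_space=${avails[$mount_point]}
  (( target_space == 0 || target_space < requested_size )) && continue
  new_size=$(( ${uses[$mount_point]} + requested_size ))
  (( new_size * 100 / ${sizes[$mount_point]} > 95 )) && continue   # keep >=5% headroom
  export SPDK_TEST_STORAGE=$target_dir
  printf '* Found test storage at %s\n' "$target_dir"
  break
done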
00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=185183186944 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=13005688832 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.635 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.636 21:36:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:31.919 
21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:31.919 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:31.919 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:31.919 Found net devices under 0000:86:00.0: cvl_0_0 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.919 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:31.920 Found net devices under 0000:86:00.1: cvl_0_1 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:31.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:11:31.920 00:11:31.920 --- 10.0.0.2 ping statistics --- 00:11:31.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.920 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:11:31.920 00:11:31.920 --- 10.0.0.1 ping statistics --- 00:11:31.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.920 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.920 ************************************ 00:11:31.920 START TEST nvmf_filesystem_no_in_capsule 00:11:31.920 ************************************ 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2979266 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2979266 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2979266 ']' 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.920 21:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.920 [2024-07-24 21:36:39.459929] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:11:31.920 [2024-07-24 21:36:39.459970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.920 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.920 [2024-07-24 21:36:39.517365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.920 [2024-07-24 21:36:39.598006] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.920 [2024-07-24 21:36:39.598044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.920 [2024-07-24 21:36:39.598051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.920 [2024-07-24 21:36:39.598057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.920 [2024-07-24 21:36:39.598062] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
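Once the namespace checks out, nvmfappstart launches nvmf_tgt inside it and the filesystem test provisions the target over the RPC socket. The rpc_cmd calls traced in the next entries are roughly equivalent to the standalone script below; rpc_cmd in the harness is a wrapper over scripts/rpc.py, the paths and flags are taken from the trace, and -c 0 reflects this being the no-in-capsule variant.

#!/usr/bin/env bash
# Target-side provisioning, condensed from the traced rpc_cmd calls.
# SPDK_ROOT and the namespace name are this run's values.
set -e
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ns=cvl_0_0_ns_spdk
rpc="$SPDK_ROOT/scripts/rpc.py"

# Start the target inside the namespace (-e 0xFFFF: all tracepoint groups, -m 0xF: 4 cores).
ip netns exec "$ns" "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
sleep 2   # the real harness polls the RPC socket via waitforlisten instead

$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # -c 0: zero in-capsule data
$rpc bdev_malloc_create 512 512 -b Malloc1           # 512 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420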
00:11:31.920 [2024-07-24 21:36:39.598115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.920 [2024-07-24 21:36:39.598209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.920 [2024-07-24 21:36:39.598295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.920 [2024-07-24 21:36:39.598296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.181 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.181 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:11:32.181 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:32.181 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:32.181 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.441 [2024-07-24 21:36:40.323513] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.441 Malloc1 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.441 21:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.441 [2024-07-24 21:36:40.471105] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.441 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:11:32.441 { 00:11:32.441 "name": "Malloc1", 00:11:32.441 "aliases": [ 00:11:32.441 "b199b344-764b-484d-9a55-521b4b9dc303" 00:11:32.441 ], 00:11:32.441 "product_name": "Malloc disk", 00:11:32.441 "block_size": 512, 00:11:32.441 "num_blocks": 1048576, 00:11:32.441 "uuid": "b199b344-764b-484d-9a55-521b4b9dc303", 00:11:32.441 "assigned_rate_limits": { 00:11:32.441 "rw_ios_per_sec": 0, 00:11:32.441 "rw_mbytes_per_sec": 0, 00:11:32.441 "r_mbytes_per_sec": 0, 00:11:32.441 "w_mbytes_per_sec": 0 00:11:32.441 }, 00:11:32.441 "claimed": true, 00:11:32.441 "claim_type": "exclusive_write", 00:11:32.441 "zoned": false, 00:11:32.442 "supported_io_types": { 00:11:32.442 "read": 
true, 00:11:32.442 "write": true, 00:11:32.442 "unmap": true, 00:11:32.442 "flush": true, 00:11:32.442 "reset": true, 00:11:32.442 "nvme_admin": false, 00:11:32.442 "nvme_io": false, 00:11:32.442 "nvme_io_md": false, 00:11:32.442 "write_zeroes": true, 00:11:32.442 "zcopy": true, 00:11:32.442 "get_zone_info": false, 00:11:32.442 "zone_management": false, 00:11:32.442 "zone_append": false, 00:11:32.442 "compare": false, 00:11:32.442 "compare_and_write": false, 00:11:32.442 "abort": true, 00:11:32.442 "seek_hole": false, 00:11:32.442 "seek_data": false, 00:11:32.442 "copy": true, 00:11:32.442 "nvme_iov_md": false 00:11:32.442 }, 00:11:32.442 "memory_domains": [ 00:11:32.442 { 00:11:32.442 "dma_device_id": "system", 00:11:32.442 "dma_device_type": 1 00:11:32.442 }, 00:11:32.442 { 00:11:32.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.442 "dma_device_type": 2 00:11:32.442 } 00:11:32.442 ], 00:11:32.442 "driver_specific": {} 00:11:32.442 } 00:11:32.442 ]' 00:11:32.442 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:11:32.442 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:11:32.442 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:11:32.702 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:11:32.702 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:11:32.702 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:11:32.702 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:32.702 21:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.639 21:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.639 21:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:11:33.639 21:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.639 21:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:33.639 21:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:36.178 21:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:36.178 21:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.116 ************************************ 00:11:37.116 START TEST filesystem_ext4 00:11:37.116 ************************************ 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
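On the initiator side the test connects to the subsystem, waits for the namespace to surface by its serial, carves a single GPT partition, and then runs a small create/sync/delete round-trip on each filesystem under test, starting with ext4 here (the btrfs case that follows repeats the pattern). The steps traced below amount to roughly the following; the hostnqn/hostid are this run's generated values.

#!/usr/bin/env bash
# Initiator-side steps, condensed from the trace. Serial and subsystem NQN
# come from the target provisioning above.
set -e
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
             --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# waitforserial: poll until a block device carrying the target's serial shows up.
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # e.g. nvme0n1

mkdir -p /mnt/device
parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1

# One filesystem round-trip (ext4 shown; mkfs.btrfs -f is used in the next test).
mkfs.ext4 -F "/dev/${dev}p1"
mount "/dev/${dev}p1" /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device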
00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:37.116 21:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:37.116 mke2fs 1.46.5 (30-Dec-2021) 00:11:37.116 Discarding device blocks: 0/522240 done 00:11:37.116 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:37.116 Filesystem UUID: 26e4d1a2-fc2f-4abb-bb75-4c254b436128 00:11:37.116 Superblock backups stored on blocks: 00:11:37.116 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:37.116 00:11:37.116 Allocating group tables: 0/64 done 00:11:37.116 Writing inode tables: 0/64 done 00:11:37.376 Creating journal (8192 blocks): done 00:11:38.466 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:11:38.466 00:11:38.466 21:36:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:38.466 21:36:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:39.405 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:39.405 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:39.405 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:39.405 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:39.405 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:39.405 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:39.405 
21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2979266 00:11:39.405 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:39.405 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:39.405 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:39.406 00:11:39.406 real 0m2.218s 00:11:39.406 user 0m0.027s 00:11:39.406 sys 0m0.039s 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:39.406 ************************************ 00:11:39.406 END TEST filesystem_ext4 00:11:39.406 ************************************ 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.406 ************************************ 00:11:39.406 START TEST filesystem_btrfs 00:11:39.406 ************************************ 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:39.406 21:36:47 
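For orientation, the ext4 pass traced above (and repeated below for btrfs and xfs) boils down to a short command sequence. This is a condensed sketch reconstructed only from the target/filesystem.sh trace, with the device, mount point and test-file names taken verbatim from it:

  # per-filesystem verification, steps @18-@43 as they appear in the trace
  make_filesystem "$fstype" /dev/nvme0n1p1        # mkfs with the fs-specific force flag
  mount /dev/nvme0n1p1 /mnt/device                # mount the freshly created filesystem
  touch /mnt/device/aaa                           # prove the filesystem is writable
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                              # nvmf_tgt must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1           # namespace still visible on the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1         # partition still visible on the host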
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:39.406 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:39.976 btrfs-progs v6.6.2 00:11:39.976 See https://btrfs.readthedocs.io for more information. 00:11:39.976 00:11:39.976 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:39.976 NOTE: several default settings have changed in version 5.15, please make sure 00:11:39.976 this does not affect your deployments: 00:11:39.976 - DUP for metadata (-m dup) 00:11:39.976 - enabled no-holes (-O no-holes) 00:11:39.976 - enabled free-space-tree (-R free-space-tree) 00:11:39.976 00:11:39.976 Label: (null) 00:11:39.976 UUID: b566b2cb-62a8-4000-9aa0-917c20047a6e 00:11:39.976 Node size: 16384 00:11:39.976 Sector size: 4096 00:11:39.976 Filesystem size: 510.00MiB 00:11:39.976 Block group profiles: 00:11:39.976 Data: single 8.00MiB 00:11:39.976 Metadata: DUP 32.00MiB 00:11:39.976 System: DUP 8.00MiB 00:11:39.976 SSD detected: yes 00:11:39.976 Zoned device: no 00:11:39.976 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:39.976 Runtime features: free-space-tree 00:11:39.976 Checksum: crc32c 00:11:39.976 Number of devices: 1 00:11:39.976 Devices: 00:11:39.976 ID SIZE PATH 00:11:39.976 1 510.00MiB /dev/nvme0n1p1 00:11:39.976 00:11:39.976 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:39.976 21:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2979266 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:40.236 00:11:40.236 real 0m0.978s 00:11:40.236 user 0m0.021s 00:11:40.236 sys 0m0.057s 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.236 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:40.236 ************************************ 00:11:40.236 END TEST filesystem_btrfs 00:11:40.236 ************************************ 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.496 ************************************ 00:11:40.496 START TEST filesystem_xfs 00:11:40.496 ************************************ 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:40.496 21:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:40.496 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:40.496 = sectsz=512 attr=2, projid32bit=1 00:11:40.496 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:40.496 = reflink=1 bigtime=1 
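The common/autotest_common.sh trace above (function lines @924-@943) shows how make_filesystem picks the force flag per filesystem type. A condensed reconstruction of just the logic visible in the trace is below; any retry handling between the mkfs call and the final return is not shown in the trace and is omitted here:

  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local i=0
      local force
      if [ "$fstype" = ext4 ]; then
          force=-F              # mke2fs forces with -F
      else
          force=-f              # mkfs.btrfs / mkfs.xfs force with -f
      fi
      mkfs.$fstype $force "$dev_name" && return 0
  }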
inobtcount=1 nrext64=0 00:11:40.496 data = bsize=4096 blocks=130560, imaxpct=25 00:11:40.496 = sunit=0 swidth=0 blks 00:11:40.496 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:40.496 log =internal log bsize=4096 blocks=16384, version=2 00:11:40.496 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:40.496 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:41.435 Discarding blocks...Done. 00:11:41.435 21:36:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:41.435 21:36:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2979266 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.377 00:11:43.377 real 0m2.725s 00:11:43.377 user 0m0.023s 00:11:43.377 sys 0m0.049s 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:43.377 ************************************ 00:11:43.377 END TEST filesystem_xfs 00:11:43.377 ************************************ 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:43.377 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2979266 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2979266 ']' 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2979266 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2979266 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2979266' 00:11:43.637 killing process with pid 2979266 00:11:43.637 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2979266 00:11:43.637 21:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2979266 00:11:43.899 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:43.899 00:11:43.899 real 0m12.539s 00:11:43.899 user 0m49.217s 00:11:43.899 sys 0m1.108s 00:11:43.899 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.899 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.899 ************************************ 00:11:43.899 END TEST nvmf_filesystem_no_in_capsule 00:11:43.899 ************************************ 00:11:43.899 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:43.899 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:43.899 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.899 21:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.899 ************************************ 00:11:43.899 START TEST nvmf_filesystem_in_capsule 00:11:43.899 ************************************ 00:11:43.899 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:11:43.899 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:43.899 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:43.899 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.899 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.899 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.899 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2981554 00:11:43.899 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2981554 00:11:43.900 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2981554 ']' 00:11:43.900 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.900 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.900 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
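Between the two test variants, the trace above tears the target down. Stripped of the test wrappers (rpc_cmd in the trace is SPDK's test helper; invoking scripts/rpc.py directly is an assumption made for readability), the host- and target-side commands are roughly:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1            # drop the SPDK_TEST partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1              # detach the NVMe-oF controller
  # waitforserial_disconnect: loop until no block device with serial SPDKISFASTANDAWESOME remains
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"                                            # killprocess in the trace
  wait "$nvmfpid"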
00:11:43.900 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.900 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.900 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.160 [2024-07-24 21:36:52.060191] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:11:44.160 [2024-07-24 21:36:52.060232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.160 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.160 [2024-07-24 21:36:52.115905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.160 [2024-07-24 21:36:52.196333] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.160 [2024-07-24 21:36:52.196370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.160 [2024-07-24 21:36:52.196377] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.160 [2024-07-24 21:36:52.196384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.160 [2024-07-24 21:36:52.196389] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.160 [2024-07-24 21:36:52.196425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.160 [2024-07-24 21:36:52.196443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.160 [2024-07-24 21:36:52.196530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.160 [2024-07-24 21:36:52.196532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.098 [2024-07-24 21:36:52.921538] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.098 21:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.098 Malloc1 00:11:45.098 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.098 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:45.098 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.098 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.099 [2024-07-24 21:36:53.067768] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:11:45.099 21:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:11:45.099 { 00:11:45.099 "name": "Malloc1", 00:11:45.099 "aliases": [ 00:11:45.099 "248bc52e-7013-496e-9719-eaa425afd067" 00:11:45.099 ], 00:11:45.099 "product_name": "Malloc disk", 00:11:45.099 "block_size": 512, 00:11:45.099 "num_blocks": 1048576, 00:11:45.099 "uuid": "248bc52e-7013-496e-9719-eaa425afd067", 00:11:45.099 "assigned_rate_limits": { 00:11:45.099 "rw_ios_per_sec": 0, 00:11:45.099 "rw_mbytes_per_sec": 0, 00:11:45.099 "r_mbytes_per_sec": 0, 00:11:45.099 "w_mbytes_per_sec": 0 00:11:45.099 }, 00:11:45.099 "claimed": true, 00:11:45.099 "claim_type": "exclusive_write", 00:11:45.099 "zoned": false, 00:11:45.099 "supported_io_types": { 00:11:45.099 "read": true, 00:11:45.099 "write": true, 00:11:45.099 "unmap": true, 00:11:45.099 "flush": true, 00:11:45.099 "reset": true, 00:11:45.099 "nvme_admin": false, 00:11:45.099 "nvme_io": false, 00:11:45.099 "nvme_io_md": false, 00:11:45.099 "write_zeroes": true, 00:11:45.099 "zcopy": true, 00:11:45.099 "get_zone_info": false, 00:11:45.099 "zone_management": false, 00:11:45.099 "zone_append": false, 00:11:45.099 "compare": false, 00:11:45.099 "compare_and_write": false, 00:11:45.099 "abort": true, 00:11:45.099 "seek_hole": false, 00:11:45.099 "seek_data": false, 00:11:45.099 "copy": true, 00:11:45.099 "nvme_iov_md": false 00:11:45.099 }, 00:11:45.099 "memory_domains": [ 00:11:45.099 { 00:11:45.099 "dma_device_id": "system", 00:11:45.099 "dma_device_type": 1 00:11:45.099 }, 00:11:45.099 { 00:11:45.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.099 "dma_device_type": 2 00:11:45.099 } 00:11:45.099 ], 00:11:45.099 "driver_specific": {} 00:11:45.099 } 00:11:45.099 ]' 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:11:45.099 21:36:53 
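The in-capsule variant's target setup traced above maps to a handful of JSON-RPC calls. Expressed as direct scripts/rpc.py invocations (an assumption; the trace issues them through the rpc_cmd wrapper, with flags copied verbatim from the trace), the sequence is:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096     # -c 4096 sets the in-capsule data size under test
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420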
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:45.099 21:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.484 21:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.484 21:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:11:46.484 21:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.484 21:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:46.484 21:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:48.390 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:48.650 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:48.909 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:50.288 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:50.288 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:50.288 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:50.288 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.288 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.288 ************************************ 00:11:50.288 START TEST filesystem_in_capsule_ext4 00:11:50.288 ************************************ 00:11:50.288 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:50.288 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:50.288 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:50.288 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:50.288 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:50.289 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:50.289 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:50.289 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:50.289 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:50.289 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:50.289 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:50.289 mke2fs 1.46.5 (30-Dec-2021) 00:11:50.289 Discarding device blocks: 0/522240 done 00:11:50.289 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:50.289 
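On the host side, the connect and partition-prep steps traced above amount to the following; the hostnqn/hostid values and the one-second settle before mkfs are copied from the trace, not invented:

  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
               --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial: poll lsblk until a device with serial SPDKISFASTANDAWESOME appears, then resolve its name
  mkdir -p /mnt/device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%        # one partition spanning the namespace
  partprobe
  sleep 1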
Filesystem UUID: e53970ba-7e04-4fb8-b494-7a173fe171ce 00:11:50.289 Superblock backups stored on blocks: 00:11:50.289 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:50.289 00:11:50.289 Allocating group tables: 0/64 done 00:11:50.289 Writing inode tables: 0/64 done 00:11:50.289 Creating journal (8192 blocks): done 00:11:50.289 Writing superblocks and filesystem accounting information: 0/64 done 00:11:50.289 00:11:50.289 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:50.289 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2981554 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:50.548 00:11:50.548 real 0m0.548s 00:11:50.548 user 0m0.020s 00:11:50.548 sys 0m0.043s 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:50.548 ************************************ 00:11:50.548 END TEST filesystem_in_capsule_ext4 00:11:50.548 ************************************ 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:50.548 21:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.548 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.807 ************************************ 00:11:50.807 START TEST filesystem_in_capsule_btrfs 00:11:50.807 ************************************ 00:11:50.807 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:50.807 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:50.807 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:50.807 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:50.807 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:50.807 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:50.807 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:50.807 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:50.807 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:50.807 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:50.807 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:51.067 btrfs-progs v6.6.2 00:11:51.067 See https://btrfs.readthedocs.io for more information. 00:11:51.067 00:11:51.067 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:51.067 NOTE: several default settings have changed in version 5.15, please make sure 00:11:51.067 this does not affect your deployments: 00:11:51.067 - DUP for metadata (-m dup) 00:11:51.067 - enabled no-holes (-O no-holes) 00:11:51.067 - enabled free-space-tree (-R free-space-tree) 00:11:51.067 00:11:51.067 Label: (null) 00:11:51.067 UUID: 27f54855-bb96-4f7a-9e6f-18d7efac20ac 00:11:51.067 Node size: 16384 00:11:51.067 Sector size: 4096 00:11:51.067 Filesystem size: 510.00MiB 00:11:51.067 Block group profiles: 00:11:51.067 Data: single 8.00MiB 00:11:51.067 Metadata: DUP 32.00MiB 00:11:51.067 System: DUP 8.00MiB 00:11:51.067 SSD detected: yes 00:11:51.067 Zoned device: no 00:11:51.067 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:51.067 Runtime features: free-space-tree 00:11:51.067 Checksum: crc32c 00:11:51.067 Number of devices: 1 00:11:51.067 Devices: 00:11:51.067 ID SIZE PATH 00:11:51.067 1 510.00MiB /dev/nvme0n1p1 00:11:51.067 00:11:51.067 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:51.067 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:52.010 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:52.010 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:52.010 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:52.010 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:52.010 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:52.010 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:52.010 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2981554 00:11:52.010 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:52.010 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:52.010 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:52.010 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:52.010 00:11:52.010 real 0m1.430s 00:11:52.010 user 0m0.021s 00:11:52.010 sys 0m0.064s 00:11:52.010 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.010 21:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:52.010 ************************************ 00:11:52.010 END TEST filesystem_in_capsule_btrfs 00:11:52.010 ************************************ 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.271 ************************************ 00:11:52.271 START TEST filesystem_in_capsule_xfs 00:11:52.271 ************************************ 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:52.271 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:52.271 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:52.271 = sectsz=512 attr=2, projid32bit=1 00:11:52.271 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:52.271 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:52.271 data = bsize=4096 blocks=130560, imaxpct=25 00:11:52.271 = sunit=0 swidth=0 blks 00:11:52.271 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:52.271 log =internal log bsize=4096 blocks=16384, version=2 00:11:52.271 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:52.271 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:11:53.210 Discarding blocks...Done. 00:11:53.210 21:37:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:53.210 21:37:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2981554 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.503 00:11:56.503 real 0m3.797s 00:11:56.503 user 0m0.025s 00:11:56.503 sys 0m0.048s 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:56.503 21:37:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:56.503 ************************************ 00:11:56.503 END TEST filesystem_in_capsule_xfs 00:11:56.503 ************************************ 00:11:56.503 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:56.503 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.763 21:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2981554 00:11:56.763 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2981554 ']' 00:11:56.764 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2981554 00:11:56.764 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:56.764 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:56.764 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2981554 00:11:56.764 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:56.764 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:56.764 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2981554' 00:11:56.764 killing process with pid 2981554 00:11:56.764 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2981554 00:11:56.764 21:37:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2981554 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:57.334 00:11:57.334 real 0m13.174s 00:11:57.334 user 0m51.722s 
00:11:57.334 sys 0m1.155s 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.334 ************************************ 00:11:57.334 END TEST nvmf_filesystem_in_capsule 00:11:57.334 ************************************ 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:57.334 rmmod nvme_tcp 00:11:57.334 rmmod nvme_fabrics 00:11:57.334 rmmod nvme_keyring 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.334 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.245 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:59.245 00:11:59.245 real 0m33.043s 00:11:59.245 user 1m42.370s 00:11:59.245 sys 0m6.068s 00:11:59.245 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:59.505 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:59.505 ************************************ 00:11:59.505 END TEST nvmf_filesystem 00:11:59.505 ************************************ 00:11:59.505 21:37:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:59.505 21:37:07 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:59.505 21:37:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:59.505 21:37:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.505 ************************************ 00:11:59.505 START TEST nvmf_target_discovery 00:11:59.505 ************************************ 00:11:59.505 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:59.505 * Looking for test storage... 00:11:59.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.505 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.505 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.506 21:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:59.506 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.830 21:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:04.830 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.830 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:04.831 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:04.831 Found net devices under 0000:86:00.0: cvl_0_0 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.831 21:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:04.831 Found net devices under 0000:86:00.1: cvl_0_1 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.831 21:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:04.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:12:04.831 00:12:04.831 --- 10.0.0.2 ping statistics --- 00:12:04.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.831 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.416 ms 00:12:04.831 00:12:04.831 --- 10.0.0.1 ping statistics --- 00:12:04.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.831 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2987350 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2987350 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2987350 ']' 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.831 21:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.831 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.831 [2024-07-24 21:37:12.907263] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:12:04.831 [2024-07-24 21:37:12.907308] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.831 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.092 [2024-07-24 21:37:12.966227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.092 [2024-07-24 21:37:13.048125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.092 [2024-07-24 21:37:13.048161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.092 [2024-07-24 21:37:13.048168] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.092 [2024-07-24 21:37:13.048177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.092 [2024-07-24 21:37:13.048182] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.092 [2024-07-24 21:37:13.048216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.092 [2024-07-24 21:37:13.048313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.092 [2024-07-24 21:37:13.048329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.092 [2024-07-24 21:37:13.048330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.660 [2024-07-24 21:37:13.755227] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.660 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.924 Null1 00:12:05.924 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.924 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:05.924 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.924 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.924 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.924 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:05.924 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.924 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.924 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.924 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 [2024-07-24 21:37:13.800693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 Null2 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 Null3 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 Null4 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:05.925 00:12:05.925 Discovery Log Number of Records 6, Generation counter 6 00:12:05.925 =====Discovery Log Entry 0====== 00:12:05.925 trtype: tcp 00:12:05.925 adrfam: ipv4 00:12:05.925 subtype: current discovery subsystem 00:12:05.925 treq: not required 00:12:05.925 portid: 0 00:12:05.925 trsvcid: 4420 00:12:05.925 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:05.925 traddr: 10.0.0.2 00:12:05.925 eflags: explicit discovery connections, duplicate discovery information 00:12:05.925 sectype: none 00:12:05.925 =====Discovery Log Entry 1====== 00:12:05.925 trtype: tcp 00:12:05.925 adrfam: ipv4 00:12:05.925 subtype: nvme subsystem 00:12:05.925 treq: not required 00:12:05.925 portid: 0 00:12:05.925 trsvcid: 4420 00:12:05.925 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:05.925 traddr: 10.0.0.2 00:12:05.925 eflags: none 00:12:05.925 sectype: none 00:12:05.925 =====Discovery Log Entry 2====== 00:12:05.925 trtype: tcp 00:12:05.925 adrfam: ipv4 00:12:05.925 subtype: nvme subsystem 00:12:05.925 treq: not required 00:12:05.925 portid: 0 00:12:05.925 trsvcid: 4420 00:12:05.925 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:05.925 traddr: 10.0.0.2 00:12:05.925 eflags: none 00:12:05.925 sectype: none 00:12:05.925 =====Discovery Log Entry 3====== 00:12:05.925 trtype: tcp 00:12:05.925 adrfam: ipv4 00:12:05.925 subtype: nvme subsystem 00:12:05.925 treq: not required 00:12:05.925 portid: 0 00:12:05.925 trsvcid: 4420 00:12:05.925 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:05.925 traddr: 10.0.0.2 00:12:05.925 eflags: none 00:12:05.925 sectype: none 00:12:05.925 =====Discovery Log Entry 4====== 00:12:05.925 trtype: tcp 00:12:05.925 adrfam: ipv4 00:12:05.925 subtype: nvme subsystem 00:12:05.925 treq: not required 00:12:05.925 portid: 0 00:12:05.925 trsvcid: 4420 00:12:05.925 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:05.925 traddr: 10.0.0.2 00:12:05.925 eflags: none 00:12:05.925 sectype: none 00:12:05.925 =====Discovery Log Entry 5====== 00:12:05.925 trtype: tcp 00:12:05.925 adrfam: ipv4 00:12:05.925 subtype: discovery subsystem referral 00:12:05.925 treq: not required 00:12:05.925 portid: 0 00:12:05.925 trsvcid: 4430 00:12:05.925 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:05.925 traddr: 10.0.0.2 00:12:05.925 eflags: none 00:12:05.925 sectype: none 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:05.925 Perform nvmf subsystem discovery via RPC 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.925 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 [ 00:12:05.926 { 00:12:05.926 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:05.926 "subtype": "Discovery", 00:12:05.926 "listen_addresses": [ 00:12:05.926 { 00:12:05.926 "trtype": "TCP", 00:12:05.926 "adrfam": "IPv4", 00:12:05.926 "traddr": "10.0.0.2", 00:12:05.926 "trsvcid": "4420" 00:12:05.926 } 00:12:05.926 ], 00:12:05.926 "allow_any_host": true, 00:12:05.926 "hosts": [] 00:12:05.926 }, 00:12:05.926 { 00:12:05.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.926 "subtype": "NVMe", 00:12:05.926 "listen_addresses": [ 00:12:05.926 { 00:12:05.926 "trtype": "TCP", 00:12:05.926 "adrfam": "IPv4", 00:12:05.926 
"traddr": "10.0.0.2", 00:12:05.926 "trsvcid": "4420" 00:12:05.926 } 00:12:05.926 ], 00:12:05.926 "allow_any_host": true, 00:12:05.926 "hosts": [], 00:12:05.926 "serial_number": "SPDK00000000000001", 00:12:05.926 "model_number": "SPDK bdev Controller", 00:12:05.926 "max_namespaces": 32, 00:12:05.926 "min_cntlid": 1, 00:12:05.926 "max_cntlid": 65519, 00:12:05.926 "namespaces": [ 00:12:05.926 { 00:12:05.926 "nsid": 1, 00:12:05.926 "bdev_name": "Null1", 00:12:05.926 "name": "Null1", 00:12:05.926 "nguid": "20F5144EA51947A49C7B1B87E45C94C4", 00:12:05.926 "uuid": "20f5144e-a519-47a4-9c7b-1b87e45c94c4" 00:12:05.926 } 00:12:05.926 ] 00:12:05.926 }, 00:12:05.926 { 00:12:05.926 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:05.926 "subtype": "NVMe", 00:12:05.926 "listen_addresses": [ 00:12:05.926 { 00:12:05.926 "trtype": "TCP", 00:12:05.926 "adrfam": "IPv4", 00:12:05.926 "traddr": "10.0.0.2", 00:12:05.926 "trsvcid": "4420" 00:12:05.926 } 00:12:05.926 ], 00:12:05.926 "allow_any_host": true, 00:12:05.926 "hosts": [], 00:12:05.926 "serial_number": "SPDK00000000000002", 00:12:05.926 "model_number": "SPDK bdev Controller", 00:12:05.926 "max_namespaces": 32, 00:12:05.926 "min_cntlid": 1, 00:12:05.926 "max_cntlid": 65519, 00:12:05.926 "namespaces": [ 00:12:05.926 { 00:12:05.926 "nsid": 1, 00:12:05.926 "bdev_name": "Null2", 00:12:05.926 "name": "Null2", 00:12:05.926 "nguid": "DC6C6CBA56B84F47AA440425DB4F4FC3", 00:12:05.926 "uuid": "dc6c6cba-56b8-4f47-aa44-0425db4f4fc3" 00:12:05.926 } 00:12:05.926 ] 00:12:05.926 }, 00:12:05.926 { 00:12:05.926 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:05.926 "subtype": "NVMe", 00:12:05.926 "listen_addresses": [ 00:12:05.926 { 00:12:05.926 "trtype": "TCP", 00:12:05.926 "adrfam": "IPv4", 00:12:05.926 "traddr": "10.0.0.2", 00:12:05.926 "trsvcid": "4420" 00:12:05.926 } 00:12:05.926 ], 00:12:05.926 "allow_any_host": true, 00:12:05.926 "hosts": [], 00:12:05.926 "serial_number": "SPDK00000000000003", 00:12:05.926 "model_number": "SPDK bdev Controller", 00:12:05.926 "max_namespaces": 32, 00:12:05.926 "min_cntlid": 1, 00:12:05.926 "max_cntlid": 65519, 00:12:05.926 "namespaces": [ 00:12:05.926 { 00:12:05.926 "nsid": 1, 00:12:05.926 "bdev_name": "Null3", 00:12:05.926 "name": "Null3", 00:12:05.926 "nguid": "AD34495BC7614B6D896FCDE5E61D2B32", 00:12:05.926 "uuid": "ad34495b-c761-4b6d-896f-cde5e61d2b32" 00:12:05.926 } 00:12:05.926 ] 00:12:05.926 }, 00:12:05.926 { 00:12:05.926 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:05.926 "subtype": "NVMe", 00:12:05.926 "listen_addresses": [ 00:12:05.926 { 00:12:05.926 "trtype": "TCP", 00:12:05.926 "adrfam": "IPv4", 00:12:05.926 "traddr": "10.0.0.2", 00:12:05.926 "trsvcid": "4420" 00:12:05.926 } 00:12:05.926 ], 00:12:05.926 "allow_any_host": true, 00:12:05.926 "hosts": [], 00:12:05.926 "serial_number": "SPDK00000000000004", 00:12:05.926 "model_number": "SPDK bdev Controller", 00:12:05.926 "max_namespaces": 32, 00:12:05.926 "min_cntlid": 1, 00:12:05.926 "max_cntlid": 65519, 00:12:05.926 "namespaces": [ 00:12:05.926 { 00:12:05.926 "nsid": 1, 00:12:05.926 "bdev_name": "Null4", 00:12:05.926 "name": "Null4", 00:12:05.926 "nguid": "1E42995E1CE14A238F21DC935749B697", 00:12:05.926 "uuid": "1e42995e-1ce1-4a23-8f21-dc935749b697" 00:12:05.926 } 00:12:05.926 ] 00:12:05.926 } 00:12:05.926 ] 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:05.926 21:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.926 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:06.187 21:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.187 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.188 rmmod nvme_tcp 00:12:06.188 rmmod nvme_fabrics 00:12:06.188 rmmod nvme_keyring 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:06.188 21:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2987350 ']' 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2987350 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2987350 ']' 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2987350 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2987350 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2987350' 00:12:06.188 killing process with pid 2987350 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2987350 00:12:06.188 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2987350 00:12:06.447 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:06.447 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:06.447 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:06.447 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.447 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:06.447 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.447 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.447 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.357 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:08.618 00:12:08.618 real 0m9.050s 00:12:08.618 user 0m7.031s 00:12:08.618 sys 0m4.393s 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.618 ************************************ 00:12:08.618 END TEST nvmf_target_discovery 00:12:08.618 ************************************ 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.618 ************************************ 00:12:08.618 START TEST nvmf_referrals 00:12:08.618 ************************************ 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:08.618 * Looking for test storage... 00:12:08.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:08.618 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.619 21:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.619 21:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:08.619 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:13.904 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:13.904 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.905 21:37:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:13.905 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:13.905 Found net devices under 0000:86:00.0: cvl_0_0 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 
00:12:13.905 Found net devices under 0000:86:00.1: cvl_0_1 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:13.905 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:14.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:14.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:12:14.166 00:12:14.166 --- 10.0.0.2 ping statistics --- 00:12:14.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.166 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.403 ms 00:12:14.166 00:12:14.166 --- 10.0.0.1 ping statistics --- 00:12:14.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.166 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2990916 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2990916 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2990916 ']' 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.166 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.166 [2024-07-24 21:37:22.117801] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:12:14.166 [2024-07-24 21:37:22.117850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.166 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.166 [2024-07-24 21:37:22.176734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.166 [2024-07-24 21:37:22.258729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.166 [2024-07-24 21:37:22.258765] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.166 [2024-07-24 21:37:22.258772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.166 [2024-07-24 21:37:22.258778] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.166 [2024-07-24 21:37:22.258783] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.166 [2024-07-24 21:37:22.258821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.166 [2024-07-24 21:37:22.258849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.166 [2024-07-24 21:37:22.258933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.166 [2024-07-24 21:37:22.258935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.108 [2024-07-24 21:37:22.980376] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.108 21:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.108 [2024-07-24 21:37:22.993721] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.108 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.108 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.376 21:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
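The add-and-verify cycle traced above can be reproduced by hand with the same RPCs and the same nvme-cli invocation; a minimal sketch (rpc.py path, host NQN/ID, addresses and ports are the values printed in this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # register one referral pointing at another discovery service and one pointing at a subsystem NQN
  $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
  $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  # the target reports both referrals over RPC ...
  $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  # ... and the host sees them in the discovery log page served on port 8009
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'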
00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.376 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.636 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:15.636 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:15.636 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:15.636 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:15.636 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:15.636 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.636 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.895 21:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.895 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
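The subnqn checks above rely on nvme-cli's JSON output; a minimal sketch of the same classification, reusing the jq filters from this trace (the discover() helper name is illustrative, its options are the ones used above):

  discover() {
      nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
          --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
  }
  # a referral registered with a subsystem NQN is reported as an "nvme subsystem" record,
  # one registered with -n discovery as a "discovery subsystem referral" record
  discover | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'               # nqn.2016-06.io.spdk:cnode1
  discover | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn' # nqn.2014-08.org.nvmexpress.discovery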
00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.154 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.412 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
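nvmftestfini, which begins with the sync above, unwinds the setup; a minimal sketch of the cleanup this run performs, assuming _remove_spdk_ns boils down to deleting the test namespace (its internals are elided in the log) and with $nvmfpid holding the target pid:

  sync
  # unload host-side NVMe/TCP modules so no stale fabrics state survives into the next test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the nvmf_tgt started for this test
  kill "$nvmfpid" && wait "$nvmfpid"
  # drop the target-side namespace and flush the initiator-side interface
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1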
00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.671 rmmod nvme_tcp 00:12:16.671 rmmod nvme_fabrics 00:12:16.671 rmmod nvme_keyring 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2990916 ']' 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2990916 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2990916 ']' 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2990916 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2990916 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2990916' 00:12:16.671 killing process with pid 2990916 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2990916 00:12:16.671 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2990916 00:12:16.930 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.930 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.930 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.930 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.930 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.930 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.930 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.930 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.836 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:18.836 00:12:18.836 real 0m10.351s 00:12:18.836 user 0m12.545s 00:12:18.836 sys 0m4.614s 00:12:18.836 21:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.836 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.836 ************************************ 00:12:18.836 END TEST nvmf_referrals 00:12:18.836 ************************************ 00:12:18.836 21:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:18.836 21:37:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:18.836 21:37:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.836 21:37:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.095 ************************************ 00:12:19.095 START TEST nvmf_connect_disconnect 00:12:19.095 ************************************ 00:12:19.095 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:19.095 * Looking for test storage... 00:12:19.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.095 21:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.095 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.096 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:24.410 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:24.410 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.410 21:37:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:24.410 Found net devices under 0000:86:00.0: cvl_0_0 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:24.410 Found net devices under 0000:86:00.1: cvl_0_1 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.410 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:24.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:12:24.411 00:12:24.411 --- 10.0.0.2 ping statistics --- 00:12:24.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.411 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:24.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:12:24.411 00:12:24.411 --- 10.0.0.1 ping statistics --- 00:12:24.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.411 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2994973 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2994973 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2994973 ']' 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.411 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.411 [2024-07-24 21:37:32.401302] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
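The target bring-up traced above condenses to roughly the following sequence (a sketch of the commands visible in the xtrace, not a general recipe: the cvl_0_0/cvl_0_1 names, 10.0.0.x addresses, port 4420 and the core/trace masks are simply the values used on this node):
    # one port of the E810 pair goes into its own namespace; both ends get addresses
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and sanity-check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # host-side initiator module, then the SPDK target inside the namespace
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &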
00:12:24.411 [2024-07-24 21:37:32.401345] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.411 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.411 [2024-07-24 21:37:32.457894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.671 [2024-07-24 21:37:32.539456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.671 [2024-07-24 21:37:32.539491] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.671 [2024-07-24 21:37:32.539497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.671 [2024-07-24 21:37:32.539504] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.671 [2024-07-24 21:37:32.539509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.671 [2024-07-24 21:37:32.539560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.671 [2024-07-24 21:37:32.539653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.671 [2024-07-24 21:37:32.539736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.671 [2024-07-24 21:37:32.539737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.240 [2024-07-24 21:37:33.250535] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.240 21:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.240 [2024-07-24 21:37:33.302121] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:25.240 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:28.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:41.717 21:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:41.717 rmmod nvme_tcp 00:12:41.717 rmmod nvme_fabrics 00:12:41.717 rmmod nvme_keyring 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2994973 ']' 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2994973 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2994973 ']' 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2994973 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2994973 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2994973' 00:12:41.717 killing process with pid 2994973 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2994973 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2994973 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.717 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:44.258 00:12:44.258 real 0m24.861s 00:12:44.258 user 1m9.879s 00:12:44.258 sys 0m4.979s 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:44.258 21:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:44.258 ************************************ 00:12:44.258 END TEST nvmf_connect_disconnect 00:12:44.258 ************************************ 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:44.258 ************************************ 00:12:44.258 START TEST nvmf_multitarget 00:12:44.258 ************************************ 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:44.258 * Looking for test storage... 00:12:44.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.258 21:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.258 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:44.259 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:49.544 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.544 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.545 21:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:49.545 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:49.545 Found net devices under 0000:86:00.0: cvl_0_0 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:49.545 Found net devices under 0000:86:00.1: cvl_0_1 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:49.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:49.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:12:49.545 00:12:49.545 --- 10.0.0.2 ping statistics --- 00:12:49.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.545 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:49.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:12:49.545 00:12:49.545 --- 10.0.0.1 ping statistics --- 00:12:49.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.545 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3001290 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3001290 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3001290 ']' 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
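Once nvmf_tgt is launched in the namespace, waitforlisten blocks until the target's RPC socket answers before the test proceeds. Roughly (an approximation of the helper's behaviour, not its actual code; rpc.py and /var/tmp/spdk.sock are the SPDK defaults, and 3001290 is the pid printed just above):
    nvmfpid=3001290
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
        sleep 0.5
    done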
00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:49.545 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:49.545 [2024-07-24 21:37:57.503847] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:12:49.545 [2024-07-24 21:37:57.503892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.545 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.545 [2024-07-24 21:37:57.562798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.545 [2024-07-24 21:37:57.635738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.545 [2024-07-24 21:37:57.635780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.545 [2024-07-24 21:37:57.635787] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.545 [2024-07-24 21:37:57.635793] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.545 [2024-07-24 21:37:57.635797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.545 [2024-07-24 21:37:57.635892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.545 [2024-07-24 21:37:57.636008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.546 [2024-07-24 21:37:57.636097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.546 [2024-07-24 21:37:57.636099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.525 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:50.525 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:50.525 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:50.525 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:50.525 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:50.525 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.525 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:50.525 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:50.525 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:50.525 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:50.525 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:50.525 "nvmf_tgt_1" 00:12:50.525 21:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:50.785 "nvmf_tgt_2" 00:12:50.785 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:50.785 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:50.785 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:50.785 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:50.785 true 00:12:50.786 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:51.046 true 00:12:51.046 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:51.046 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:51.046 rmmod nvme_tcp 00:12:51.046 rmmod nvme_fabrics 00:12:51.046 rmmod nvme_keyring 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3001290 ']' 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3001290 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3001290 ']' 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3001290 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
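[Editor's note] Stripped of the xtrace noise, the multitarget check above is a handful of RPCs against the default target. A condensed sketch using the same multitarget_rpc.py wrapper and the default /var/tmp/spdk.sock socket; the -s 32 value is copied from the log, and the counts match the 1 -> 3 -> 1 transitions seen there:

  rpc=./test/nvmf/target/multitarget_rpc.py
  $rpc nvmf_get_targets | jq length             # 1: only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32   # add a second target
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32   # add a third target
  $rpc nvmf_get_targets | jq length             # 3
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  $rpc nvmf_get_targets | jq length             # back to 1 (the default target)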
00:12:51.046 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3001290 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3001290' 00:12:51.306 killing process with pid 3001290 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3001290 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3001290 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.306 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:53.848 00:12:53.848 real 0m9.553s 00:12:53.848 user 0m9.169s 00:12:53.848 sys 0m4.534s 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:53.848 ************************************ 00:12:53.848 END TEST nvmf_multitarget 00:12:53.848 ************************************ 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:53.848 ************************************ 00:12:53.848 START TEST nvmf_rpc 00:12:53.848 ************************************ 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:53.848 * Looking for test storage... 
00:12:53.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:53.848 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:53.849 21:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:53.849 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.129 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.130 21:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:59.130 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:59.130 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.130 
21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:59.130 Found net devices under 0000:86:00.0: cvl_0_0 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:59.130 Found net devices under 0000:86:00.1: cvl_0_1 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.130 21:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:59.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:12:59.130 00:12:59.130 --- 10.0.0.2 ping statistics --- 00:12:59.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.130 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:12:59.130 00:12:59.130 --- 10.0.0.1 ping statistics --- 00:12:59.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.130 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.130 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3004920 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3004920 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3004920 ']' 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.130 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.130 [2024-07-24 21:38:07.064476] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:12:59.130 [2024-07-24 21:38:07.064524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.130 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.130 [2024-07-24 21:38:07.119240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.131 [2024-07-24 21:38:07.200712] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.131 [2024-07-24 21:38:07.200752] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.131 [2024-07-24 21:38:07.200760] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.131 [2024-07-24 21:38:07.200766] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.131 [2024-07-24 21:38:07.200772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
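[Editor's note] The startup notices above also spell out how to inspect the target while these tests run. Both commands below are taken from those notices, assuming the usual build/bin location of spdk_trace and the same -i 0 instance id:

  ./build/bin/spdk_trace -s nvmf -i 0    # capture a snapshot of events from the live target
  cp /dev/shm/nvmf_trace.0 /tmp/         # keep the trace shm file for offline analysis/debug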
00:12:59.131 [2024-07-24 21:38:07.200812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.131 [2024-07-24 21:38:07.200912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.131 [2024-07-24 21:38:07.201006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.131 [2024-07-24 21:38:07.201007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.069 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.069 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:00.070 "tick_rate": 2300000000, 00:13:00.070 "poll_groups": [ 00:13:00.070 { 00:13:00.070 "name": "nvmf_tgt_poll_group_000", 00:13:00.070 "admin_qpairs": 0, 00:13:00.070 "io_qpairs": 0, 00:13:00.070 "current_admin_qpairs": 0, 00:13:00.070 "current_io_qpairs": 0, 00:13:00.070 "pending_bdev_io": 0, 00:13:00.070 "completed_nvme_io": 0, 00:13:00.070 "transports": [] 00:13:00.070 }, 00:13:00.070 { 00:13:00.070 "name": "nvmf_tgt_poll_group_001", 00:13:00.070 "admin_qpairs": 0, 00:13:00.070 "io_qpairs": 0, 00:13:00.070 "current_admin_qpairs": 0, 00:13:00.070 "current_io_qpairs": 0, 00:13:00.070 "pending_bdev_io": 0, 00:13:00.070 "completed_nvme_io": 0, 00:13:00.070 "transports": [] 00:13:00.070 }, 00:13:00.070 { 00:13:00.070 "name": "nvmf_tgt_poll_group_002", 00:13:00.070 "admin_qpairs": 0, 00:13:00.070 "io_qpairs": 0, 00:13:00.070 "current_admin_qpairs": 0, 00:13:00.070 "current_io_qpairs": 0, 00:13:00.070 "pending_bdev_io": 0, 00:13:00.070 "completed_nvme_io": 0, 00:13:00.070 "transports": [] 00:13:00.070 }, 00:13:00.070 { 00:13:00.070 "name": "nvmf_tgt_poll_group_003", 00:13:00.070 "admin_qpairs": 0, 00:13:00.070 "io_qpairs": 0, 00:13:00.070 "current_admin_qpairs": 0, 00:13:00.070 "current_io_qpairs": 0, 00:13:00.070 "pending_bdev_io": 0, 00:13:00.070 "completed_nvme_io": 0, 00:13:00.070 "transports": [] 00:13:00.070 } 00:13:00.070 ] 00:13:00.070 }' 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
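[Editor's note] The jcount/jsum helpers used above are thin jq/awk wrappers over the nvmf_get_stats output shown in the log. Roughly equivalent one-liners against the same target, assuming the default /var/tmp/spdk.sock socket:

  ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l                                 # 4 poll groups for -m 0xF
  ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'      # 0 before any host connects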
00:13:00.070 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.070 [2024-07-24 21:38:08.033864] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:00.070 "tick_rate": 2300000000, 00:13:00.070 "poll_groups": [ 00:13:00.070 { 00:13:00.070 "name": "nvmf_tgt_poll_group_000", 00:13:00.070 "admin_qpairs": 0, 00:13:00.070 "io_qpairs": 0, 00:13:00.070 "current_admin_qpairs": 0, 00:13:00.070 "current_io_qpairs": 0, 00:13:00.070 "pending_bdev_io": 0, 00:13:00.070 "completed_nvme_io": 0, 00:13:00.070 "transports": [ 00:13:00.070 { 00:13:00.070 "trtype": "TCP" 00:13:00.070 } 00:13:00.070 ] 00:13:00.070 }, 00:13:00.070 { 00:13:00.070 "name": "nvmf_tgt_poll_group_001", 00:13:00.070 "admin_qpairs": 0, 00:13:00.070 "io_qpairs": 0, 00:13:00.070 "current_admin_qpairs": 0, 00:13:00.070 "current_io_qpairs": 0, 00:13:00.070 "pending_bdev_io": 0, 00:13:00.070 "completed_nvme_io": 0, 00:13:00.070 "transports": [ 00:13:00.070 { 00:13:00.070 "trtype": "TCP" 00:13:00.070 } 00:13:00.070 ] 00:13:00.070 }, 00:13:00.070 { 00:13:00.070 "name": "nvmf_tgt_poll_group_002", 00:13:00.070 "admin_qpairs": 0, 00:13:00.070 "io_qpairs": 0, 00:13:00.070 "current_admin_qpairs": 0, 00:13:00.070 "current_io_qpairs": 0, 00:13:00.070 "pending_bdev_io": 0, 00:13:00.070 "completed_nvme_io": 0, 00:13:00.070 "transports": [ 00:13:00.070 { 00:13:00.070 "trtype": "TCP" 00:13:00.070 } 00:13:00.070 ] 00:13:00.070 }, 00:13:00.070 { 00:13:00.070 "name": "nvmf_tgt_poll_group_003", 00:13:00.070 "admin_qpairs": 0, 00:13:00.070 "io_qpairs": 0, 00:13:00.070 "current_admin_qpairs": 0, 00:13:00.070 "current_io_qpairs": 0, 00:13:00.070 "pending_bdev_io": 0, 00:13:00.070 "completed_nvme_io": 0, 00:13:00.070 "transports": [ 00:13:00.070 { 00:13:00.070 "trtype": "TCP" 00:13:00.070 } 00:13:00.070 ] 00:13:00.070 } 00:13:00.070 ] 00:13:00.070 }' 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:00.070 21:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.070 Malloc1 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.070 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.330 [2024-07-24 21:38:08.205793] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:00.330 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:00.331 [2024-07-24 21:38:08.230514] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:13:00.331 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:00.331 could not add new controller: failed to write to nvme-fabrics device 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.331 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.270 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.270 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:13:01.270 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.270 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:01.270 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:03.810 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.811 [2024-07-24 21:38:11.473906] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:13:03.811 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:03.811 could not add new controller: failed to write to nvme-fabrics device 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.811 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.750 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.750 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:13:04.750 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.750 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:04.750 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 
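[Editor's note] The access-control exercise above reduces to: deny-by-default, rejected connect, explicit add_host, successful connect, remove_host, rejected again, then allow_any_host. A compact sketch against the same subsystem and listener; $hostnqn is an illustrative variable standing in for the NQN that nvme gen-hostnqn produced earlier in the log:

  rpc=./scripts/rpc.py
  subnqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_allow_any_host -d $subnqn                               # unknown hosts are refused
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $subnqn --hostnqn="$hostnqn"     # fails: subsystem does not allow this host
  $rpc nvmf_subsystem_add_host $subnqn "$hostnqn"
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $subnqn --hostnqn="$hostnqn"     # succeeds
  nvme disconnect -n $subnqn
  $rpc nvmf_subsystem_remove_host $subnqn "$hostnqn"                          # back to rejected connects
  $rpc nvmf_subsystem_allow_any_host -e $subnqn                               # or open the subsystem to every host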
00:13:06.658 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:06.658 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:06.658 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.658 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:06.658 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.658 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:13:06.658 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.918 [2024-07-24 21:38:14.844100] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.918 
21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.918 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.858 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.858 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:13:07.858 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.858 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:07.858 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:13:10.430 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:10.430 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:10.430 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.430 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:10.430 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.430 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:13:10.430 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 
00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.430 [2024-07-24 21:38:18.133251] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.430 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.400 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.400 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1196 -- # local i=0 00:13:11.400 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.400 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:11.400 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.412 [2024-07-24 21:38:21.387676] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.412 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.795 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.795 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:13:14.795 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.795 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:14.795 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.707 21:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.707 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.708 [2024-07-24 21:38:24.695421] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.708 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.090 21:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.090 21:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:13:18.090 21:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.090 21:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:18.090 21:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.001 [2024-07-24 21:38:27.975963] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.001 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.002 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:20.002 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.002 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.002 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.002 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.002 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.002 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.002 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.002 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.378 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.378 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:13:21.378 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.378 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:21.378 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:23.288 21:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.288 21:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.288 [2024-07-24 21:38:31.220668] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.288 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 [2024-07-24 21:38:31.268768] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 [2024-07-24 21:38:31.320956] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 [2024-07-24 21:38:31.369106] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.289 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.547 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.548 [2024-07-24 21:38:31.417275] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.548 21:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:23.548 "tick_rate": 2300000000, 00:13:23.548 "poll_groups": [ 00:13:23.548 { 00:13:23.548 "name": "nvmf_tgt_poll_group_000", 00:13:23.548 "admin_qpairs": 2, 00:13:23.548 "io_qpairs": 168, 00:13:23.548 "current_admin_qpairs": 0, 00:13:23.548 "current_io_qpairs": 0, 00:13:23.548 "pending_bdev_io": 0, 00:13:23.548 "completed_nvme_io": 267, 00:13:23.548 "transports": [ 00:13:23.548 { 00:13:23.548 "trtype": "TCP" 00:13:23.548 } 00:13:23.548 ] 00:13:23.548 }, 00:13:23.548 { 00:13:23.548 "name": "nvmf_tgt_poll_group_001", 00:13:23.548 "admin_qpairs": 2, 00:13:23.548 "io_qpairs": 168, 00:13:23.548 "current_admin_qpairs": 0, 00:13:23.548 "current_io_qpairs": 0, 00:13:23.548 "pending_bdev_io": 0, 00:13:23.548 "completed_nvme_io": 268, 00:13:23.548 "transports": [ 00:13:23.548 { 00:13:23.548 "trtype": "TCP" 00:13:23.548 } 00:13:23.548 ] 00:13:23.548 }, 00:13:23.548 { 00:13:23.548 "name": "nvmf_tgt_poll_group_002", 00:13:23.548 "admin_qpairs": 1, 00:13:23.548 "io_qpairs": 168, 00:13:23.548 "current_admin_qpairs": 0, 00:13:23.548 "current_io_qpairs": 0, 00:13:23.548 "pending_bdev_io": 0, 00:13:23.548 "completed_nvme_io": 218, 00:13:23.548 "transports": [ 00:13:23.548 { 00:13:23.548 "trtype": "TCP" 00:13:23.548 } 00:13:23.548 ] 00:13:23.548 }, 00:13:23.548 { 00:13:23.548 "name": "nvmf_tgt_poll_group_003", 00:13:23.548 "admin_qpairs": 2, 00:13:23.548 "io_qpairs": 168, 00:13:23.548 "current_admin_qpairs": 0, 00:13:23.548 "current_io_qpairs": 0, 00:13:23.548 "pending_bdev_io": 0, 00:13:23.548 "completed_nvme_io": 269, 00:13:23.548 "transports": [ 00:13:23.548 { 00:13:23.548 "trtype": "TCP" 00:13:23.548 } 00:13:23.548 ] 00:13:23.548 } 00:13:23.548 ] 00:13:23.548 }' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.548 rmmod nvme_tcp 00:13:23.548 rmmod nvme_fabrics 00:13:23.548 rmmod nvme_keyring 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3004920 ']' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3004920 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3004920 ']' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3004920 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.548 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3004920 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3004920' 00:13:23.808 killing process with pid 3004920 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3004920 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3004920 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.808 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.345 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:26.345 00:13:26.345 real 0m32.420s 00:13:26.345 user 1m39.993s 00:13:26.345 sys 0m5.586s 00:13:26.345 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:26.345 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.345 ************************************ 00:13:26.345 END TEST nvmf_rpc 00:13:26.345 ************************************ 00:13:26.345 21:38:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:26.345 21:38:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:26.345 21:38:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.345 21:38:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.345 ************************************ 00:13:26.345 START TEST nvmf_invalid 00:13:26.345 ************************************ 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:26.345 * Looking for test storage... 00:13:26.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:26.345 21:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.345 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.346 21:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.346 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:31.624 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:31.624 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:31.624 Found net devices under 0000:86:00.0: cvl_0_0 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.624 21:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:31.624 Found net devices under 0000:86:00.1: cvl_0_1 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.624 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:31.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:13:31.625 00:13:31.625 --- 10.0.0.2 ping statistics --- 00:13:31.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.625 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:13:31.625 00:13:31.625 --- 10.0.0.1 ping statistics --- 00:13:31.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.625 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3012503 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3012503 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3012503 ']' 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.625 21:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.625 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:31.625 [2024-07-24 21:38:39.223522] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:13:31.625 [2024-07-24 21:38:39.223567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.625 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.625 [2024-07-24 21:38:39.282370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.625 [2024-07-24 21:38:39.355908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.625 [2024-07-24 21:38:39.355948] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.625 [2024-07-24 21:38:39.355955] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.625 [2024-07-24 21:38:39.355960] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.625 [2024-07-24 21:38:39.355965] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
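The target process is now up and listening on the UNIX domain socket /var/tmp/spdk.sock, and every invalid.sh check that follows uses the same shape: call an RPC through scripts/rpc.py with a deliberately malformed argument, capture the JSON-RPC error text, and pattern-match the message. A minimal sketch of that shape, assuming the rpc.py path seen in this run and a running nvmf_tgt; the cnode number here is illustrative only, the real script picks its own NQNs:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Ask for a target name that does not exist; rpc.py reports the JSON-RPC error
# (code -32603, "Unable to find target foobar") and may exit non-zero, so guard it.
out=$("$rpc_py" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1 2>&1) || true
# The test only asserts that the expected error text is present in the captured output.
if [[ $out == *"Unable to find target"* ]]; then
    echo 'invalid target name rejected as expected'
else
    echo 'unexpected response from nvmf_create_subsystem' >&2
fi

The same capture-and-match pattern is reused below for the invalid serial number (-s with a trailing control character), the invalid model number (-d), the listener removal on a non-existent listener, and the cntlid range checks (-i/-I).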
00:13:31.625 [2024-07-24 21:38:39.356066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.625 [2024-07-24 21:38:39.356122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.625 [2024-07-24 21:38:39.356239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.625 [2024-07-24 21:38:39.356240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.195 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.195 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:32.195 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.195 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:32.195 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.195 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.195 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:32.195 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14371 00:13:32.195 [2024-07-24 21:38:40.232071] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:32.195 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:32.195 { 00:13:32.195 "nqn": "nqn.2016-06.io.spdk:cnode14371", 00:13:32.195 "tgt_name": "foobar", 00:13:32.195 "method": "nvmf_create_subsystem", 00:13:32.195 "req_id": 1 00:13:32.195 } 00:13:32.195 Got JSON-RPC error response 00:13:32.195 response: 00:13:32.195 { 00:13:32.195 "code": -32603, 00:13:32.195 "message": "Unable to find target foobar" 00:13:32.195 }' 00:13:32.195 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:32.195 { 00:13:32.195 "nqn": "nqn.2016-06.io.spdk:cnode14371", 00:13:32.195 "tgt_name": "foobar", 00:13:32.195 "method": "nvmf_create_subsystem", 00:13:32.195 "req_id": 1 00:13:32.195 } 00:13:32.195 Got JSON-RPC error response 00:13:32.195 response: 00:13:32.195 { 00:13:32.195 "code": -32603, 00:13:32.195 "message": "Unable to find target foobar" 00:13:32.195 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:32.195 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:32.195 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15915 00:13:32.455 [2024-07-24 21:38:40.416737] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15915: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:32.455 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:32.455 { 00:13:32.455 "nqn": "nqn.2016-06.io.spdk:cnode15915", 00:13:32.455 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:32.455 "method": "nvmf_create_subsystem", 00:13:32.455 "req_id": 1 00:13:32.455 } 00:13:32.455 Got JSON-RPC error 
response 00:13:32.455 response: 00:13:32.455 { 00:13:32.455 "code": -32602, 00:13:32.455 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:32.455 }' 00:13:32.455 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:32.455 { 00:13:32.455 "nqn": "nqn.2016-06.io.spdk:cnode15915", 00:13:32.455 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:32.455 "method": "nvmf_create_subsystem", 00:13:32.455 "req_id": 1 00:13:32.455 } 00:13:32.455 Got JSON-RPC error response 00:13:32.455 response: 00:13:32.455 { 00:13:32.455 "code": -32602, 00:13:32.455 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:32.455 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:32.455 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:32.455 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11790 00:13:32.716 [2024-07-24 21:38:40.605292] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11790: invalid model number 'SPDK_Controller' 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:32.716 { 00:13:32.716 "nqn": "nqn.2016-06.io.spdk:cnode11790", 00:13:32.716 "model_number": "SPDK_Controller\u001f", 00:13:32.716 "method": "nvmf_create_subsystem", 00:13:32.716 "req_id": 1 00:13:32.716 } 00:13:32.716 Got JSON-RPC error response 00:13:32.716 response: 00:13:32.716 { 00:13:32.716 "code": -32602, 00:13:32.716 "message": "Invalid MN SPDK_Controller\u001f" 00:13:32.716 }' 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:32.716 { 00:13:32.716 "nqn": "nqn.2016-06.io.spdk:cnode11790", 00:13:32.716 "model_number": "SPDK_Controller\u001f", 00:13:32.716 "method": "nvmf_create_subsystem", 00:13:32.716 "req_id": 1 00:13:32.716 } 00:13:32.716 Got JSON-RPC error response 00:13:32.716 response: 00:13:32.716 { 00:13:32.716 "code": -32602, 00:13:32.716 "message": "Invalid MN SPDK_Controller\u001f" 00:13:32.716 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 80 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:32.716 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:32.717 21:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'PV4THOj/p)=d%DJY"T16t' 00:13:32.717 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'PV4THOj/p)=d%DJY"T16t' nqn.2016-06.io.spdk:cnode15811 00:13:32.978 [2024-07-24 21:38:40.926371] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15811: invalid serial number 'PV4THOj/p)=d%DJY"T16t' 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:32.978 { 00:13:32.978 "nqn": "nqn.2016-06.io.spdk:cnode15811", 00:13:32.978 "serial_number": "PV4THOj/p)=d%DJY\"T16t", 00:13:32.978 "method": "nvmf_create_subsystem", 00:13:32.978 "req_id": 1 00:13:32.978 } 00:13:32.978 Got JSON-RPC error response 00:13:32.978 response: 00:13:32.978 { 00:13:32.978 "code": -32602, 00:13:32.978 "message": "Invalid SN PV4THOj/p)=d%DJY\"T16t" 00:13:32.978 }' 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:32.978 { 00:13:32.978 "nqn": "nqn.2016-06.io.spdk:cnode15811", 00:13:32.978 "serial_number": "PV4THOj/p)=d%DJY\"T16t", 00:13:32.978 "method": "nvmf_create_subsystem", 00:13:32.978 "req_id": 1 00:13:32.978 } 00:13:32.978 Got JSON-RPC error response 00:13:32.978 response: 00:13:32.978 { 00:13:32.978 "code": -32602, 00:13:32.978 "message": "Invalid SN PV4THOj/p)=d%DJY\"T16t" 00:13:32.978 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:32.978 21:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.978 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.978 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:32.978 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:32.978 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:32.978 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.978 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.978 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:32.978 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:32.978 21:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:32.978 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.978 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:32.979 21:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.979 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:33.239 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:33.239 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:33.239 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.239 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.239 
21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:33.239 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:33.239 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:33.239 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.239 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.239 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:33.239 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 
21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 
00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'$:OGNlRb`C:SCs_#$-^MQCrfMm673\Xq~Oyr}DNRl' 00:13:33.240 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '$:OGNlRb`C:SCs_#$-^MQCrfMm673\Xq~Oyr}DNRl' nqn.2016-06.io.spdk:cnode19881 00:13:33.500 [2024-07-24 21:38:41.387937] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19881: invalid model number '$:OGNlRb`C:SCs_#$-^MQCrfMm673\Xq~Oyr}DNRl' 00:13:33.500 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:33.500 { 00:13:33.500 "nqn": "nqn.2016-06.io.spdk:cnode19881", 00:13:33.500 "model_number": "$:OGNlRb`C:SCs_#$-^MQCrfMm673\\Xq~Oyr}DNRl", 00:13:33.500 "method": "nvmf_create_subsystem", 00:13:33.500 "req_id": 1 00:13:33.500 } 00:13:33.500 Got JSON-RPC error response 00:13:33.500 response: 00:13:33.500 { 00:13:33.500 "code": -32602, 00:13:33.500 "message": "Invalid MN $:OGNlRb`C:SCs_#$-^MQCrfMm673\\Xq~Oyr}DNRl" 00:13:33.500 }' 00:13:33.500 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:33.500 { 00:13:33.500 "nqn": "nqn.2016-06.io.spdk:cnode19881", 00:13:33.500 "model_number": "$:OGNlRb`C:SCs_#$-^MQCrfMm673\\Xq~Oyr}DNRl", 00:13:33.500 "method": "nvmf_create_subsystem", 00:13:33.500 "req_id": 1 00:13:33.500 } 00:13:33.500 Got JSON-RPC error response 00:13:33.500 response: 00:13:33.500 { 00:13:33.500 "code": -32602, 00:13:33.500 "message": "Invalid MN $:OGNlRb`C:SCs_#$-^MQCrfMm673\\Xq~Oyr}DNRl" 00:13:33.500 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:33.500 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:33.500 [2024-07-24 21:38:41.576632] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.500 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:33.760 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:33.760 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:33.760 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:33.760 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:33.760 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:34.020 [2024-07-24 21:38:41.959259] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:34.020 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:34.020 { 00:13:34.020 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:34.020 "listen_address": { 00:13:34.020 "trtype": "tcp", 00:13:34.020 "traddr": "", 00:13:34.020 "trsvcid": "4421" 00:13:34.020 }, 00:13:34.020 "method": "nvmf_subsystem_remove_listener", 00:13:34.020 "req_id": 1 00:13:34.020 } 00:13:34.020 Got JSON-RPC error response 00:13:34.020 response: 00:13:34.020 { 00:13:34.020 "code": -32602, 00:13:34.020 "message": "Invalid parameters" 00:13:34.020 }' 00:13:34.020 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 
request: 00:13:34.020 { 00:13:34.020 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:34.020 "listen_address": { 00:13:34.020 "trtype": "tcp", 00:13:34.020 "traddr": "", 00:13:34.020 "trsvcid": "4421" 00:13:34.020 }, 00:13:34.020 "method": "nvmf_subsystem_remove_listener", 00:13:34.020 "req_id": 1 00:13:34.020 } 00:13:34.020 Got JSON-RPC error response 00:13:34.020 response: 00:13:34.020 { 00:13:34.020 "code": -32602, 00:13:34.020 "message": "Invalid parameters" 00:13:34.020 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:34.020 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16317 -i 0 00:13:34.280 [2024-07-24 21:38:42.139833] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16317: invalid cntlid range [0-65519] 00:13:34.280 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:34.280 { 00:13:34.280 "nqn": "nqn.2016-06.io.spdk:cnode16317", 00:13:34.280 "min_cntlid": 0, 00:13:34.280 "method": "nvmf_create_subsystem", 00:13:34.280 "req_id": 1 00:13:34.280 } 00:13:34.280 Got JSON-RPC error response 00:13:34.280 response: 00:13:34.280 { 00:13:34.280 "code": -32602, 00:13:34.280 "message": "Invalid cntlid range [0-65519]" 00:13:34.280 }' 00:13:34.280 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:34.280 { 00:13:34.280 "nqn": "nqn.2016-06.io.spdk:cnode16317", 00:13:34.280 "min_cntlid": 0, 00:13:34.280 "method": "nvmf_create_subsystem", 00:13:34.280 "req_id": 1 00:13:34.280 } 00:13:34.280 Got JSON-RPC error response 00:13:34.280 response: 00:13:34.280 { 00:13:34.280 "code": -32602, 00:13:34.280 "message": "Invalid cntlid range [0-65519]" 00:13:34.280 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:34.280 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14559 -i 65520 00:13:34.280 [2024-07-24 21:38:42.328483] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14559: invalid cntlid range [65520-65519] 00:13:34.280 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:34.280 { 00:13:34.280 "nqn": "nqn.2016-06.io.spdk:cnode14559", 00:13:34.280 "min_cntlid": 65520, 00:13:34.280 "method": "nvmf_create_subsystem", 00:13:34.280 "req_id": 1 00:13:34.280 } 00:13:34.280 Got JSON-RPC error response 00:13:34.280 response: 00:13:34.280 { 00:13:34.280 "code": -32602, 00:13:34.280 "message": "Invalid cntlid range [65520-65519]" 00:13:34.280 }' 00:13:34.280 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:34.280 { 00:13:34.280 "nqn": "nqn.2016-06.io.spdk:cnode14559", 00:13:34.280 "min_cntlid": 65520, 00:13:34.280 "method": "nvmf_create_subsystem", 00:13:34.280 "req_id": 1 00:13:34.280 } 00:13:34.280 Got JSON-RPC error response 00:13:34.280 response: 00:13:34.280 { 00:13:34.280 "code": -32602, 00:13:34.280 "message": "Invalid cntlid range [65520-65519]" 00:13:34.280 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:34.280 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12082 -I 0 00:13:34.539 [2024-07-24 
21:38:42.525196] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12082: invalid cntlid range [1-0] 00:13:34.539 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:34.539 { 00:13:34.539 "nqn": "nqn.2016-06.io.spdk:cnode12082", 00:13:34.539 "max_cntlid": 0, 00:13:34.539 "method": "nvmf_create_subsystem", 00:13:34.539 "req_id": 1 00:13:34.539 } 00:13:34.539 Got JSON-RPC error response 00:13:34.539 response: 00:13:34.539 { 00:13:34.539 "code": -32602, 00:13:34.539 "message": "Invalid cntlid range [1-0]" 00:13:34.539 }' 00:13:34.539 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:34.539 { 00:13:34.539 "nqn": "nqn.2016-06.io.spdk:cnode12082", 00:13:34.539 "max_cntlid": 0, 00:13:34.539 "method": "nvmf_create_subsystem", 00:13:34.539 "req_id": 1 00:13:34.539 } 00:13:34.539 Got JSON-RPC error response 00:13:34.539 response: 00:13:34.539 { 00:13:34.539 "code": -32602, 00:13:34.539 "message": "Invalid cntlid range [1-0]" 00:13:34.539 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:34.539 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15140 -I 65520 00:13:34.799 [2024-07-24 21:38:42.717798] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15140: invalid cntlid range [1-65520] 00:13:34.799 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:34.799 { 00:13:34.799 "nqn": "nqn.2016-06.io.spdk:cnode15140", 00:13:34.799 "max_cntlid": 65520, 00:13:34.799 "method": "nvmf_create_subsystem", 00:13:34.799 "req_id": 1 00:13:34.799 } 00:13:34.799 Got JSON-RPC error response 00:13:34.799 response: 00:13:34.799 { 00:13:34.799 "code": -32602, 00:13:34.799 "message": "Invalid cntlid range [1-65520]" 00:13:34.799 }' 00:13:34.799 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:34.799 { 00:13:34.799 "nqn": "nqn.2016-06.io.spdk:cnode15140", 00:13:34.799 "max_cntlid": 65520, 00:13:34.799 "method": "nvmf_create_subsystem", 00:13:34.799 "req_id": 1 00:13:34.799 } 00:13:34.799 Got JSON-RPC error response 00:13:34.799 response: 00:13:34.799 { 00:13:34.799 "code": -32602, 00:13:34.799 "message": "Invalid cntlid range [1-65520]" 00:13:34.799 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:34.799 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25013 -i 6 -I 5 00:13:34.799 [2024-07-24 21:38:42.894428] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25013: invalid cntlid range [6-5] 00:13:35.059 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:35.059 { 00:13:35.059 "nqn": "nqn.2016-06.io.spdk:cnode25013", 00:13:35.059 "min_cntlid": 6, 00:13:35.059 "max_cntlid": 5, 00:13:35.059 "method": "nvmf_create_subsystem", 00:13:35.059 "req_id": 1 00:13:35.059 } 00:13:35.059 Got JSON-RPC error response 00:13:35.059 response: 00:13:35.059 { 00:13:35.059 "code": -32602, 00:13:35.059 "message": "Invalid cntlid range [6-5]" 00:13:35.059 }' 00:13:35.059 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:35.059 { 00:13:35.059 "nqn": 
"nqn.2016-06.io.spdk:cnode25013", 00:13:35.059 "min_cntlid": 6, 00:13:35.059 "max_cntlid": 5, 00:13:35.059 "method": "nvmf_create_subsystem", 00:13:35.059 "req_id": 1 00:13:35.059 } 00:13:35.059 Got JSON-RPC error response 00:13:35.059 response: 00:13:35.059 { 00:13:35.059 "code": -32602, 00:13:35.059 "message": "Invalid cntlid range [6-5]" 00:13:35.059 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.059 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:35.059 { 00:13:35.059 "name": "foobar", 00:13:35.059 "method": "nvmf_delete_target", 00:13:35.059 "req_id": 1 00:13:35.059 } 00:13:35.059 Got JSON-RPC error response 00:13:35.059 response: 00:13:35.059 { 00:13:35.059 "code": -32602, 00:13:35.059 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:35.059 }' 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:35.059 { 00:13:35.059 "name": "foobar", 00:13:35.059 "method": "nvmf_delete_target", 00:13:35.059 "req_id": 1 00:13:35.059 } 00:13:35.059 Got JSON-RPC error response 00:13:35.059 response: 00:13:35.059 { 00:13:35.059 "code": -32602, 00:13:35.059 "message": "The specified target doesn't exist, cannot delete it." 00:13:35.059 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:35.059 rmmod nvme_tcp 00:13:35.059 rmmod nvme_fabrics 00:13:35.059 rmmod nvme_keyring 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3012503 ']' 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3012503 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3012503 ']' 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3012503 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:35.059 
21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3012503 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3012503' 00:13:35.059 killing process with pid 3012503 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3012503 00:13:35.059 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3012503 00:13:35.319 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:35.319 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:35.319 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:35.319 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:35.319 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:35.319 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.319 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.319 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:37.859 00:13:37.859 real 0m11.354s 00:13:37.859 user 0m19.369s 00:13:37.859 sys 0m4.819s 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:37.859 ************************************ 00:13:37.859 END TEST nvmf_invalid 00:13:37.859 ************************************ 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:37.859 ************************************ 00:13:37.859 START TEST nvmf_connect_stress 00:13:37.859 ************************************ 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:37.859 * Looking for test storage... 
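The nvmf_invalid run that ends above exercises nvmf_tgt's JSON-RPC interface with deliberately bad arguments (a malformed model number, cntlid ranges such as [0-65519], [1-0] and [6-5], and a delete of a non-existent target) and asserts that every call fails with code -32602 and the expected message text. As a rough, hand-run sketch only, one of those negative checks could be repeated like this, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock RPC socket and rpc.py invoked from an SPDK checkout (the relative path is illustrative):

    # Sketch: repeat the min_cntlid=0 check from invalid.sh by hand.
    # The target rejects the request, rpc.py exits non-zero, and the
    # captured output should contain "Invalid cntlid range [0-65519]".
    out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16317 -i 0 2>&1) || true
    [[ $out == *'Invalid cntlid range'* ]] && echo 'rejected as expected'
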
00:13:37.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.859 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:37.860 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.144 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.144 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.144 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.144 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.144 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.144 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.144 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.145 21:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:43.145 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:43.145 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
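The entries above come from gather_supported_nvmf_pci_devs in nvmf/common.sh: it builds the e810/x722/mlx PCI device-ID lists, keeps the e810 set, matches the two 8086:0x159b functions at 0000:86:00.0 and 0000:86:00.1, and confirms each is bound to the ice driver rather than left unknown or unbound; the entries that follow resolve each function to its kernel net device through sysfs. As a hedged aside, the same PCI-to-netdev mapping can be spot-checked by hand along these lines (the addresses and the cvl_0_0 name are taken from this log and will differ on other hosts):

    # Sketch: confirm by hand what the discovery loop derives for one port.
    lspci -nn -s 0000:86:00.0                                        # expect an 8086:159b (E810) function
    basename "$(readlink /sys/bus/pci/devices/0000:86:00.0/driver)"  # expect "ice"
    ls /sys/bus/pci/devices/0000:86:00.0/net/                        # expect the bound netdev, e.g. cvl_0_0
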
00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:43.145 Found net devices under 0000:86:00.0: cvl_0_0 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:43.145 Found net devices under 0000:86:00.1: cvl_0_1 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.145 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:13:43.145 00:13:43.145 --- 10.0.0.2 ping statistics --- 00:13:43.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.145 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:13:43.145 00:13:43.145 --- 10.0.0.1 ping statistics --- 00:13:43.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.145 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.145 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3016833 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3016833 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3016833 ']' 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.146 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.146 [2024-07-24 21:38:51.147523] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
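By this point nvmftestinit has wired up the test network (10.0.0.1 on cvl_0_1, 10.0.0.2 on cvl_0_0 inside the cvl_0_0_ns_spdk namespace, both verified by the pings above) and nvmfappstart has launched nvmf_tgt with core mask 0xE inside that namespace. The rpc_cmd entries that follow configure the target before the connect_stress initiator is pointed at nqn.2016-06.io.spdk:cnode1 for a 10-second run; stripped of the autotest wrapper, that setup amounts to roughly these rpc.py calls (a sketch that assumes the default /var/tmp/spdk.sock socket and execution inside the same namespace):

    # Sketch of the target-side setup that connect_stress.sh performs below:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192-byte IO unit
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512                    # 1000 MiB null bdev, 512-byte blocks
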
00:13:43.146 [2024-07-24 21:38:51.147573] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.146 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.146 [2024-07-24 21:38:51.206494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.406 [2024-07-24 21:38:51.280422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.406 [2024-07-24 21:38:51.280462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.406 [2024-07-24 21:38:51.280468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.406 [2024-07-24 21:38:51.280474] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.406 [2024-07-24 21:38:51.280479] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.406 [2024-07-24 21:38:51.280599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.406 [2024-07-24 21:38:51.280692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.406 [2024-07-24 21:38:51.280694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.976 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.976 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:13:43.976 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.976 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:43.976 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.976 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.976 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:43.976 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.976 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.976 [2024-07-24 21:38:52.000158] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.976 [2024-07-24 21:38:52.037176] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.976 NULL1 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3016912 00:13:43.976 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.977 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.977 21:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.977 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.236 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.236 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.236 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.236 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.237 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.237 21:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.497 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.497 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:44.497 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.497 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.497 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.757 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.757 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:44.757 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.757 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.757 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.017 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.017 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:45.017 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.017 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.017 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.587 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.587 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:45.587 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.587 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.587 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.847 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.847 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:45.847 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.847 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.847 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.107 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.107 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:46.107 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.107 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.107 21:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.368 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.368 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:46.368 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.368 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.368 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.628 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.628 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:46.628 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.628 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.628 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.197 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.197 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:47.197 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.197 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.197 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.457 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.457 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:47.457 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.457 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.457 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.717 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.717 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:47.717 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.717 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.717 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.977 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.977 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:47.977 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.977 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.977 21:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.283 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.283 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:48.283 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.283 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.283 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.565 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.565 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:48.565 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.565 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.565 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.137 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.137 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:49.137 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.137 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.137 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.397 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.397 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:49.397 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.397 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.397 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.657 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.657 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:49.657 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.657 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.657 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.916 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.917 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:49.917 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.917 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.917 21:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.176 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.176 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:50.176 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.176 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.176 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.746 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.746 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:50.746 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.746 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.746 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.006 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.006 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:51.006 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.006 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.006 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.266 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.266 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:51.266 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.266 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.266 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.526 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.526 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:51.526 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.526 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.526 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.785 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.785 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:51.785 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.785 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.785 21:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.353 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.353 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:52.353 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.353 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.353 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.612 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.612 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:52.612 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.612 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.612 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.871 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.871 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:52.871 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.871 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.871 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.130 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.130 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:53.130 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.130 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.130 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.698 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.698 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:53.698 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.698 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.698 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.958 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.958 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:53.958 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.958 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.958 21:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.218 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.218 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:54.218 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.218 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.218 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.218 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3016912 00:13:54.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3016912) - No such process 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3016912 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:54.478 rmmod nvme_tcp 00:13:54.478 rmmod nvme_fabrics 00:13:54.478 rmmod nvme_keyring 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3016833 ']' 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3016833 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3016833 ']' 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3016833 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:54.478 21:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3016833 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:54.478 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3016833' 00:13:54.478 killing process with pid 3016833 00:13:54.738 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3016833 00:13:54.738 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3016833 00:13:54.738 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.738 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.738 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:54.738 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.738 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.738 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.738 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.738 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.280 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:57.280 00:13:57.280 real 0m19.398s 00:13:57.280 user 0m41.887s 00:13:57.280 sys 0m8.051s 00:13:57.280 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:57.280 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.280 ************************************ 00:13:57.280 END TEST nvmf_connect_stress 00:13:57.280 ************************************ 00:13:57.280 21:39:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:57.280 21:39:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:57.280 21:39:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.280 21:39:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.280 ************************************ 00:13:57.280 START TEST nvmf_fused_ordering 00:13:57.280 ************************************ 00:13:57.280 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:57.280 * Looking for test storage... 
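[Editor's note] The connect_stress output above is a simple supervision loop: target/connect_stress.sh keeps checking that the stress process (pid 3016912 here) is still alive with `kill -0` and issues an RPC on every pass, and once `kill -0` reports "No such process" it waits on the pid, removes the temporary rpc.txt, clears the EXIT trap and runs nvmftestfini (unload nvme-tcp/nvme-fabrics/nvme-keyring, kill the nvmf_tgt pid, drop the spdk netns). Below is a minimal bash sketch of that pattern, not the harness code itself; the rpc.py path, the sleep interval and the choice of nvmf_get_subsystems as the per-iteration RPC are illustrative assumptions.

  #!/usr/bin/env bash
  # Sketch only: supervise a stress workload against an SPDK target, then clean up.
  STRESS_PID=$1                       # pid of the stress tool (connect_stress here)
  TGT_PID=$2                          # pid of the nvmf_tgt application
  RPC=/path/to/spdk/scripts/rpc.py    # placeholder path (assumption)

  # Poll until the stress process exits, issuing an RPC on each pass so the
  # target keeps servicing management traffic while connections churn.
  while kill -0 "$STRESS_PID" 2>/dev/null; do
      "$RPC" nvmf_get_subsystems > /dev/null || true
      sleep 1
  done
  wait "$STRESS_PID" 2>/dev/null || true   # reap it if it was our child

  # Teardown in the spirit of nvmftestfini: unload the kernel initiator
  # modules and stop the target process.
  modprobe -v -r nvme-tcp || true
  modprobe -v -r nvme-fabrics || true
  kill "$TGT_PID" 2>/dev/null || true

In the harness itself the per-iteration RPC comes from the generated rpc.txt via the rpc_cmd helper; the sketch only mirrors the control flow visible in the trace.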
00:13:57.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:57.280 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.560 21:39:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.560 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:02.561 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:02.561 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:02.561 Found net devices under 0000:86:00.0: cvl_0_0 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:02.561 Found net devices under 0000:86:00.1: cvl_0_1 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:14:02.561 00:14:02.561 --- 10.0.0.2 ping statistics --- 00:14:02.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.561 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:14:02.561 00:14:02.561 --- 10.0.0.1 ping statistics --- 00:14:02.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.561 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3022528 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3022528 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3022528 ']' 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.561 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:02.561 [2024-07-24 21:39:10.016546] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:14:02.561 [2024-07-24 21:39:10.016592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.561 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.562 [2024-07-24 21:39:10.077030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.562 [2024-07-24 21:39:10.155569] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.562 [2024-07-24 21:39:10.155604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.562 [2024-07-24 21:39:10.155610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.562 [2024-07-24 21:39:10.155617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.562 [2024-07-24 21:39:10.155622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.562 [2024-07-24 21:39:10.155637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:02.820 [2024-07-24 21:39:10.854415] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.820 [2024-07-24 21:39:10.870539] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:02.820 NULL1 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.820 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:02.820 [2024-07-24 21:39:10.923157] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
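[Editor's note] Before the fused_ordering tool attaches (its startup banner continues below), the trace above brings the target up with a short RPC sequence: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, add a TCP listener on 10.0.0.2:4420, create the NULL1 null bdev (1000 MB, 512-byte blocks; reported later as a 1GB namespace) and attach it as a namespace. A sketch of the same sequence replayed directly with scripts/rpc.py against the target's default /var/tmp/spdk.sock, rather than through the harness's rpc_cmd wrapper; the rpc.py path is a placeholder, the flags are copied exactly as traced:

  RPC=/path/to/spdk/scripts/rpc.py    # placeholder path (assumption)

  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$RPC" bdev_null_create NULL1 1000 512      # null bdev: 1000 MB, 512-byte blocks
  "$RPC" bdev_wait_for_examine
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The null bdev keeps the data path trivial, so the fused_ordering run that follows exercises command ordering on the TCP transport rather than any real storage backend.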
00:14:02.820 [2024-07-24 21:39:10.923187] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3022820 ] 00:14:03.082 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.023 Attached to nqn.2016-06.io.spdk:cnode1 00:14:04.023 Namespace ID: 1 size: 1GB 00:14:04.023 fused_ordering(0) 00:14:04.023 fused_ordering(1) 00:14:04.023 fused_ordering(2) 00:14:04.023 fused_ordering(3) 00:14:04.023 fused_ordering(4) 00:14:04.023 fused_ordering(5) 00:14:04.023 fused_ordering(6) 00:14:04.023 fused_ordering(7) 00:14:04.023 fused_ordering(8) 00:14:04.023 fused_ordering(9) 00:14:04.023 fused_ordering(10) 00:14:04.023 fused_ordering(11) 00:14:04.023 fused_ordering(12) 00:14:04.023 fused_ordering(13) 00:14:04.023 fused_ordering(14) 00:14:04.023 fused_ordering(15) 00:14:04.023 fused_ordering(16) 00:14:04.023 fused_ordering(17) 00:14:04.023 fused_ordering(18) 00:14:04.023 fused_ordering(19) 00:14:04.023 fused_ordering(20) 00:14:04.023 fused_ordering(21) 00:14:04.023 fused_ordering(22) 00:14:04.023 fused_ordering(23) 00:14:04.023 fused_ordering(24) 00:14:04.023 fused_ordering(25) 00:14:04.023 fused_ordering(26) 00:14:04.023 fused_ordering(27) 00:14:04.023 fused_ordering(28) 00:14:04.023 fused_ordering(29) 00:14:04.023 fused_ordering(30) 00:14:04.023 fused_ordering(31) 00:14:04.023 fused_ordering(32) 00:14:04.023 fused_ordering(33) 00:14:04.023 fused_ordering(34) 00:14:04.023 fused_ordering(35) 00:14:04.023 fused_ordering(36) 00:14:04.023 fused_ordering(37) 00:14:04.023 fused_ordering(38) 00:14:04.023 fused_ordering(39) 00:14:04.023 fused_ordering(40) 00:14:04.023 fused_ordering(41) 00:14:04.024 fused_ordering(42) 00:14:04.024 fused_ordering(43) 00:14:04.024 fused_ordering(44) 00:14:04.024 fused_ordering(45) 00:14:04.024 fused_ordering(46) 00:14:04.024 fused_ordering(47) 00:14:04.024 fused_ordering(48) 00:14:04.024 fused_ordering(49) 00:14:04.024 fused_ordering(50) 00:14:04.024 fused_ordering(51) 00:14:04.024 fused_ordering(52) 00:14:04.024 fused_ordering(53) 00:14:04.024 fused_ordering(54) 00:14:04.024 fused_ordering(55) 00:14:04.024 fused_ordering(56) 00:14:04.024 fused_ordering(57) 00:14:04.024 fused_ordering(58) 00:14:04.024 fused_ordering(59) 00:14:04.024 fused_ordering(60) 00:14:04.024 fused_ordering(61) 00:14:04.024 fused_ordering(62) 00:14:04.024 fused_ordering(63) 00:14:04.024 fused_ordering(64) 00:14:04.024 fused_ordering(65) 00:14:04.024 fused_ordering(66) 00:14:04.024 fused_ordering(67) 00:14:04.024 fused_ordering(68) 00:14:04.024 fused_ordering(69) 00:14:04.024 fused_ordering(70) 00:14:04.024 fused_ordering(71) 00:14:04.024 fused_ordering(72) 00:14:04.024 fused_ordering(73) 00:14:04.024 fused_ordering(74) 00:14:04.024 fused_ordering(75) 00:14:04.024 fused_ordering(76) 00:14:04.024 fused_ordering(77) 00:14:04.024 fused_ordering(78) 00:14:04.024 fused_ordering(79) 00:14:04.024 fused_ordering(80) 00:14:04.024 fused_ordering(81) 00:14:04.024 fused_ordering(82) 00:14:04.024 fused_ordering(83) 00:14:04.024 fused_ordering(84) 00:14:04.024 fused_ordering(85) 00:14:04.024 fused_ordering(86) 00:14:04.024 fused_ordering(87) 00:14:04.024 fused_ordering(88) 00:14:04.024 fused_ordering(89) 00:14:04.024 fused_ordering(90) 00:14:04.024 fused_ordering(91) 00:14:04.024 fused_ordering(92) 00:14:04.024 fused_ordering(93) 00:14:04.024 fused_ordering(94) 00:14:04.024 fused_ordering(95) 00:14:04.024 fused_ordering(96) 
00:14:04.024 fused_ordering(97) 00:14:04.024 fused_ordering(98) 00:14:04.024 fused_ordering(99) 00:14:04.024 fused_ordering(100) 00:14:04.024 fused_ordering(101) 00:14:04.024 fused_ordering(102) 00:14:04.024 fused_ordering(103) 00:14:04.024 fused_ordering(104) 00:14:04.024 fused_ordering(105) 00:14:04.024 fused_ordering(106) 00:14:04.024 fused_ordering(107) 00:14:04.024 fused_ordering(108) 00:14:04.024 fused_ordering(109) 00:14:04.024 fused_ordering(110) 00:14:04.024 fused_ordering(111) 00:14:04.024 fused_ordering(112) 00:14:04.024 fused_ordering(113) 00:14:04.024 fused_ordering(114) 00:14:04.024 fused_ordering(115) 00:14:04.024 fused_ordering(116) 00:14:04.024 fused_ordering(117) 00:14:04.024 fused_ordering(118) 00:14:04.024 fused_ordering(119) 00:14:04.024 fused_ordering(120) 00:14:04.024 fused_ordering(121) 00:14:04.024 fused_ordering(122) 00:14:04.024 fused_ordering(123) 00:14:04.024 fused_ordering(124) 00:14:04.024 fused_ordering(125) 00:14:04.024 fused_ordering(126) 00:14:04.024 fused_ordering(127) 00:14:04.024 fused_ordering(128) 00:14:04.024 fused_ordering(129) 00:14:04.024 fused_ordering(130) 00:14:04.024 fused_ordering(131) 00:14:04.024 fused_ordering(132) 00:14:04.024 fused_ordering(133) 00:14:04.024 fused_ordering(134) 00:14:04.024 fused_ordering(135) 00:14:04.024 fused_ordering(136) 00:14:04.024 fused_ordering(137) 00:14:04.024 fused_ordering(138) 00:14:04.024 fused_ordering(139) 00:14:04.024 fused_ordering(140) 00:14:04.024 fused_ordering(141) 00:14:04.024 fused_ordering(142) 00:14:04.024 fused_ordering(143) 00:14:04.024 fused_ordering(144) 00:14:04.024 fused_ordering(145) 00:14:04.024 fused_ordering(146) 00:14:04.024 fused_ordering(147) 00:14:04.024 fused_ordering(148) 00:14:04.024 fused_ordering(149) 00:14:04.024 fused_ordering(150) 00:14:04.024 fused_ordering(151) 00:14:04.024 fused_ordering(152) 00:14:04.024 fused_ordering(153) 00:14:04.024 fused_ordering(154) 00:14:04.024 fused_ordering(155) 00:14:04.024 fused_ordering(156) 00:14:04.024 fused_ordering(157) 00:14:04.024 fused_ordering(158) 00:14:04.024 fused_ordering(159) 00:14:04.024 fused_ordering(160) 00:14:04.024 fused_ordering(161) 00:14:04.024 fused_ordering(162) 00:14:04.024 fused_ordering(163) 00:14:04.024 fused_ordering(164) 00:14:04.024 fused_ordering(165) 00:14:04.024 fused_ordering(166) 00:14:04.024 fused_ordering(167) 00:14:04.024 fused_ordering(168) 00:14:04.024 fused_ordering(169) 00:14:04.024 fused_ordering(170) 00:14:04.024 fused_ordering(171) 00:14:04.024 fused_ordering(172) 00:14:04.024 fused_ordering(173) 00:14:04.024 fused_ordering(174) 00:14:04.024 fused_ordering(175) 00:14:04.024 fused_ordering(176) 00:14:04.024 fused_ordering(177) 00:14:04.024 fused_ordering(178) 00:14:04.024 fused_ordering(179) 00:14:04.024 fused_ordering(180) 00:14:04.024 fused_ordering(181) 00:14:04.024 fused_ordering(182) 00:14:04.024 fused_ordering(183) 00:14:04.024 fused_ordering(184) 00:14:04.024 fused_ordering(185) 00:14:04.024 fused_ordering(186) 00:14:04.024 fused_ordering(187) 00:14:04.024 fused_ordering(188) 00:14:04.024 fused_ordering(189) 00:14:04.024 fused_ordering(190) 00:14:04.024 fused_ordering(191) 00:14:04.024 fused_ordering(192) 00:14:04.024 fused_ordering(193) 00:14:04.024 fused_ordering(194) 00:14:04.024 fused_ordering(195) 00:14:04.024 fused_ordering(196) 00:14:04.024 fused_ordering(197) 00:14:04.024 fused_ordering(198) 00:14:04.024 fused_ordering(199) 00:14:04.024 fused_ordering(200) 00:14:04.024 fused_ordering(201) 00:14:04.024 fused_ordering(202) 00:14:04.024 fused_ordering(203) 00:14:04.024 
fused_ordering(204) 00:14:04.024 … fused_ordering(956) [repetitive per-iteration fused_ordering progress output elided; the counter advances from 204 to 956 while the timestamps move from 00:14:04.591 through 00:14:05.607 and 00:14:06.548 to 00:14:07.484 in roughly 205-iteration bursts]
00:14:07.485 fused_ordering(957) 00:14:07.485 fused_ordering(958) 00:14:07.485 fused_ordering(959) 00:14:07.485 fused_ordering(960) 00:14:07.485 fused_ordering(961) 00:14:07.485 fused_ordering(962) 00:14:07.485 fused_ordering(963) 00:14:07.485 fused_ordering(964) 00:14:07.485 fused_ordering(965) 00:14:07.485 fused_ordering(966) 00:14:07.485 fused_ordering(967) 00:14:07.485 fused_ordering(968) 00:14:07.485 fused_ordering(969) 00:14:07.485 fused_ordering(970) 00:14:07.485 fused_ordering(971) 00:14:07.485 fused_ordering(972) 00:14:07.485 fused_ordering(973) 00:14:07.485 fused_ordering(974) 00:14:07.485 fused_ordering(975) 00:14:07.485 fused_ordering(976) 00:14:07.485 fused_ordering(977) 00:14:07.485 fused_ordering(978) 00:14:07.485 fused_ordering(979) 00:14:07.485 fused_ordering(980) 00:14:07.485 fused_ordering(981) 00:14:07.485 fused_ordering(982) 00:14:07.485 fused_ordering(983) 00:14:07.485 fused_ordering(984) 00:14:07.485 fused_ordering(985) 00:14:07.485 fused_ordering(986) 00:14:07.485 fused_ordering(987) 00:14:07.485 fused_ordering(988) 00:14:07.485 fused_ordering(989) 00:14:07.485 fused_ordering(990) 00:14:07.485 fused_ordering(991) 00:14:07.485 fused_ordering(992) 00:14:07.485 fused_ordering(993) 00:14:07.485 fused_ordering(994) 00:14:07.485 fused_ordering(995) 00:14:07.485 fused_ordering(996) 00:14:07.485 fused_ordering(997) 00:14:07.485 fused_ordering(998) 00:14:07.485 fused_ordering(999) 00:14:07.485 fused_ordering(1000) 00:14:07.485 fused_ordering(1001) 00:14:07.485 fused_ordering(1002) 00:14:07.485 fused_ordering(1003) 00:14:07.485 fused_ordering(1004) 00:14:07.485 fused_ordering(1005) 00:14:07.485 fused_ordering(1006) 00:14:07.485 fused_ordering(1007) 00:14:07.485 fused_ordering(1008) 00:14:07.485 fused_ordering(1009) 00:14:07.485 fused_ordering(1010) 00:14:07.485 fused_ordering(1011) 00:14:07.485 fused_ordering(1012) 00:14:07.485 fused_ordering(1013) 00:14:07.485 fused_ordering(1014) 00:14:07.485 fused_ordering(1015) 00:14:07.485 fused_ordering(1016) 00:14:07.485 fused_ordering(1017) 00:14:07.485 fused_ordering(1018) 00:14:07.485 fused_ordering(1019) 00:14:07.485 fused_ordering(1020) 00:14:07.485 fused_ordering(1021) 00:14:07.485 fused_ordering(1022) 00:14:07.485 fused_ordering(1023) 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.485 rmmod nvme_tcp 00:14:07.485 rmmod nvme_fabrics 00:14:07.485 rmmod nvme_keyring 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3022528 ']' 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3022528 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3022528 ']' 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3022528 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.485 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3022528 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3022528' 00:14:07.744 killing process with pid 3022528 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3022528 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3022528 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.744 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.285 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:10.285 00:14:10.285 real 0m12.930s 00:14:10.285 user 0m8.599s 00:14:10.285 sys 0m7.127s 00:14:10.285 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:10.285 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.285 ************************************ 00:14:10.285 END TEST nvmf_fused_ordering 00:14:10.285 ************************************ 00:14:10.285 21:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:10.285 21:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:10.285 21:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.285 21:39:17 
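The nvmf_fused_ordering teardown traced here boils down to a short shell sequence. A minimal sketch of what the trace shows, with the pid (3022528) and interface name (cvl_0_1) specific to this run; the final namespace removal is an assumption, since the body of _remove_spdk_ns is not echoed in the log:

  sync                                   # settle outstanding I/O before unloading modules
  modprobe -v -r nvme-tcp                # the trace shows nvme_tcp, nvme_fabrics, nvme_keyring being removed
  modprobe -v -r nvme-fabrics
  kill 3022528                           # nvmf_tgt pid for this run (killprocess in the trace)
  while kill -0 3022528 2>/dev/null; do sleep 0.1; done   # wait for the target to exit
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # assumed cleanup done by _remove_spdk_ns
  ip -4 addr flush cvl_0_1               # clear the initiator-side address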
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.285 ************************************ 00:14:10.285 START TEST nvmf_ns_masking 00:14:10.285 ************************************ 00:14:10.285 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:10.285 * Looking for test storage... 00:14:10.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.285 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.286 21:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9210e3a0-b180-4856-8cfb-cef45c51b104 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=b541312f-55fd-4062-9ecd-59739fc3f496 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d2991424-8768-4b90-acce-8ed3e3101434 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:10.286 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.577 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.577 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
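For orientation, these are the identifiers ns_masking.sh defines before nvmftestinit, reconstructed from the trace above (the uuidgen values naturally differ on every run):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  loops=5
  ns1uuid=$(uuidgen)                     # 9210e3a0-b180-4856-8cfb-cef45c51b104 in this run
  ns2uuid=$(uuidgen)                     # b541312f-55fd-4062-9ecd-59739fc3f496 in this run
  SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN1=nqn.2016-06.io.spdk:host1
  HOSTNQN2=nqn.2016-06.io.spdk:host2
  HOSTID=$(uuidgen)                      # d2991424-8768-4b90-acce-8ed3e3101434 here; passed to nvme connect -I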
nvmf/common.sh@291 -- # local -a pci_devs 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:15.578 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:15.578 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.578 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:15.579 Found net devices under 0000:86:00.0: cvl_0_0 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:15.579 Found net devices under 0000:86:00.1: cvl_0_1 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:15.579 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.580 21:39:23 
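The nvmf_tcp_init steps traced around here wire one physical E810 port into a network namespace as the target side and leave the other port in the root namespace as the initiator. A condensed sketch of the same commands, with the interface names detected in this run:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  # the trace then pings 10.0.0.2 and 10.0.0.1 in both directions before starting nvmf_tgt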
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.580 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:15.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:14:15.840 00:14:15.840 --- 10.0.0.2 ping statistics --- 00:14:15.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.840 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:14:15.840 00:14:15.840 --- 10.0.0.1 ping statistics --- 00:14:15.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.840 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3027032 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3027032 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3027032 ']' 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.840 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.840 [2024-07-24 21:39:23.786358] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:14:15.840 [2024-07-24 21:39:23.786401] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.840 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.840 [2024-07-24 21:39:23.843949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.840 [2024-07-24 21:39:23.921126] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.840 [2024-07-24 21:39:23.921164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.840 [2024-07-24 21:39:23.921172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.840 [2024-07-24 21:39:23.921178] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.840 [2024-07-24 21:39:23.921184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.840 [2024-07-24 21:39:23.921201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.778 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.778 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:16.778 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.778 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:16.778 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.778 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.778 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:16.778 [2024-07-24 21:39:24.781295] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.778 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:16.778 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:16.778 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:17.038 Malloc1 00:14:17.038 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:17.038 Malloc2 00:14:17.296 21:39:25 
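With networking in place, the target is started inside the namespace and configured over JSON-RPC. A sketch of the calls the trace shows, reproducing the transport flags as logged rather than interpreting them, with paths shortened to the workspace-relative spdk tree:

  ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!                                               # 3027032 in this run
  ./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options as logged
  ./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MB malloc bdev, 512-byte blocks
  ./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2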
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:17.296 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:17.555 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.555 [2024-07-24 21:39:25.649759] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.555 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:17.555 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d2991424-8768-4b90-acce-8ed3e3101434 -a 10.0.0.2 -s 4420 -i 4 00:14:17.814 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:17.814 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:14:17.814 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.814 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:14:17.814 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.354 [ 0]:0x1 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba324084ed5442e6ac621e3267221ce4 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba324084ed5442e6ac621e3267221ce4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.354 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.354 [ 0]:0x1 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba324084ed5442e6ac621e3267221ce4 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba324084ed5442e6ac621e3267221ce4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.354 [ 1]:0x2 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aab9a3e79b254d76a2a9eb3395c0a2ab 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aab9a3e79b254d76a2a9eb3395c0a2ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.354 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.614 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:20.874 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:20.874 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d2991424-8768-4b90-acce-8ed3e3101434 -a 10.0.0.2 -s 4420 -i 4 00:14:20.874 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:20.874 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:14:20.874 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.874 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 1 ]] 00:14:20.874 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=1 00:14:20.874 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:14:23.414 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:14:23.414 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:14:23.414 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.414 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:14:23.414 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.414 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:14:23.414 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:23.414 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:23.414 [ 0]:0x2 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aab9a3e79b254d76a2a9eb3395c0a2ab 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aab9a3e79b254d76a2a9eb3395c0a2ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.414 [ 0]:0x1 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba324084ed5442e6ac621e3267221ce4 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba324084ed5442e6ac621e3267221ce4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:23.414 [ 1]:0x2 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aab9a3e79b254d76a2a9eb3395c0a2ab 00:14:23.414 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aab9a3e79b254d76a2a9eb3395c0a2ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.415 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.674 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:23.674 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:23.674 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:23.674 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:23.674 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.674 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:23.674 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.674 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:23.674 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.674 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.674 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:23.675 [ 0]:0x2 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aab9a3e79b254d76a2a9eb3395c0a2ab 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aab9a3e79b254d76a2a9eb3395c0a2ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.675 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.935 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:23.935 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d2991424-8768-4b90-acce-8ed3e3101434 -a 10.0.0.2 -s 4420 -i 4 00:14:23.935 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:23.935 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:14:23.935 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.935 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:14:23.935 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:14:23.935 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:14:26.475 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.476 [ 0]:0x1 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba324084ed5442e6ac621e3267221ce4 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba324084ed5442e6ac621e3267221ce4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.476 [ 1]:0x2 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aab9a3e79b254d76a2a9eb3395c0a2ab 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aab9a3e79b254d76a2a9eb3395c0a2ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:26.476 21:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.476 [ 0]:0x2 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aab9a3e79b254d76a2a9eb3395c0a2ab 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aab9a3e79b254d76a2a9eb3395c0a2ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:26.476 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.736 [2024-07-24 21:39:34.659300] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:26.736 request: 00:14:26.736 { 00:14:26.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.736 "nsid": 2, 00:14:26.736 "host": "nqn.2016-06.io.spdk:host1", 00:14:26.736 "method": "nvmf_ns_remove_host", 00:14:26.736 "req_id": 1 00:14:26.736 } 00:14:26.736 Got JSON-RPC error response 00:14:26.736 response: 00:14:26.736 { 00:14:26.736 "code": -32602, 00:14:26.736 "message": "Invalid parameters" 00:14:26.736 } 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.736 [ 0]:0x2 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aab9a3e79b254d76a2a9eb3395c0a2ab 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aab9a3e79b254d76a2a9eb3395c0a2ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:26.736 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.997 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3029026 00:14:26.997 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.997 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3029026 /var/tmp/host.sock 00:14:26.997 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:26.997 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3029026 ']' 00:14:26.997 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:26.997 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.997 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:26.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:26.997 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.997 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.997 [2024-07-24 21:39:34.997384] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
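The trace above is the core of the namespace-masking check: a namespace attached with --no-auto-visible stays hidden from every host (nvme list-ns does not report it and nvme id-ns returns an all-zero NGUID) until nvmf_ns_add_host grants visibility to a specific host NQN, and nvmf_ns_remove_host hides it again; the same removal attempted against auto-visible namespace 2 fails with JSON-RPC error -32602. A minimal sketch of that flow, assuming a running target with subsystem nqn.2016-06.io.spdk:cnode1 and a connected controller at /dev/nvme0, using only commands that appear in the trace (rpc.py is invoked with its full workspace path above):

  # attach the namespace hidden from all hosts by default
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # grant, then revoke, visibility for one host NQN
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # host-side check: a masked NSID reports an all-zero NGUID
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid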
00:14:26.997 [2024-07-24 21:39:34.997430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029026 ] 00:14:26.997 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.997 [2024-07-24 21:39:35.050404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.256 [2024-07-24 21:39:35.123472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.827 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.827 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:27.827 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.087 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:28.087 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9210e3a0-b180-4856-8cfb-cef45c51b104 00:14:28.087 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:28.087 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9210E3A0B18048568CFBCEF45C51B104 -i 00:14:28.348 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid b541312f-55fd-4062-9ecd-59739fc3f496 00:14:28.348 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:28.348 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B541312F55FD40629ECD59739FC3F496 -i 00:14:28.609 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.609 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:28.868 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:28.868 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:29.128 nvme0n1 00:14:29.128 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:29.128 21:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:29.738 nvme1n2 00:14:29.738 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:29.738 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:29.738 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:29.738 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:29.738 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:29.738 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:29.738 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:29.738 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:29.738 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:30.000 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9210e3a0-b180-4856-8cfb-cef45c51b104 == \9\2\1\0\e\3\a\0\-\b\1\8\0\-\4\8\5\6\-\8\c\f\b\-\c\e\f\4\5\c\5\1\b\1\0\4 ]] 00:14:30.000 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:30.000 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:30.000 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:30.268 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ b541312f-55fd-4062-9ecd-59739fc3f496 == \b\5\4\1\3\1\2\f\-\5\5\f\d\-\4\0\6\2\-\9\e\c\d\-\5\9\7\3\9\f\c\3\f\4\9\6 ]] 00:14:30.268 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3029026 00:14:30.268 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3029026 ']' 00:14:30.268 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3029026 00:14:30.268 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:30.268 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.268 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3029026 00:14:30.268 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:30.268 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:30.268 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 3029026' 00:14:30.268 killing process with pid 3029026 00:14:30.268 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3029026 00:14:30.268 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3029026 00:14:30.528 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.788 rmmod nvme_tcp 00:14:30.788 rmmod nvme_fabrics 00:14:30.788 rmmod nvme_keyring 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3027032 ']' 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3027032 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3027032 ']' 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3027032 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3027032 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3027032' 00:14:30.788 killing process with pid 3027032 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3027032 00:14:30.788 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3027032 00:14:31.047 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.047 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.047 
21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.047 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.047 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.047 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.047 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.047 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.955 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:32.955 00:14:32.955 real 0m23.142s 00:14:32.955 user 0m24.886s 00:14:32.955 sys 0m6.191s 00:14:32.955 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.955 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:32.955 ************************************ 00:14:32.955 END TEST nvmf_ns_masking 00:14:32.955 ************************************ 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:33.214 ************************************ 00:14:33.214 START TEST nvmf_nvme_cli 00:14:33.214 ************************************ 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:33.214 * Looking for test storage... 
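Just before the teardown above and the start of the nvmf_nvme_cli test, the trace re-creates both namespaces with explicit NGUIDs (the -g values are the UUIDs upper-cased with the dashes removed, hence the tr -d - calls) and verifies them from a second spdk_tgt acting as the host on /var/tmp/host.sock. A rough sketch of that verification step, with all names taken from the trace rather than from a general recipe:

  # attach one controller per host NQN through the host-side RPC socket
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
         -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
  # read back the bdev UUID and compare it with the UUID used to derive the NGUID
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'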
00:14:33.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.214 21:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.214 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.493 21:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:38.493 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:38.493 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:38.493 21:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:38.493 Found net devices under 0000:86:00.0: cvl_0_0 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:38.493 Found net devices under 0000:86:00.1: cvl_0_1 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.493 21:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.493 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:38.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:14:38.494 00:14:38.494 --- 10.0.0.2 ping statistics --- 00:14:38.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.494 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.400 ms 00:14:38.494 00:14:38.494 --- 10.0.0.1 ping statistics --- 00:14:38.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.494 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3033036 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3033036 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3033036 ']' 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.494 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.494 [2024-07-24 21:39:46.400960] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:14:38.494 [2024-07-24 21:39:46.401002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.494 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.494 [2024-07-24 21:39:46.457941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.494 [2024-07-24 21:39:46.539341] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.494 [2024-07-24 21:39:46.539377] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.494 [2024-07-24 21:39:46.539385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.494 [2024-07-24 21:39:46.539391] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.494 [2024-07-24 21:39:46.539396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.494 [2024-07-24 21:39:46.539436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.494 [2024-07-24 21:39:46.539453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.494 [2024-07-24 21:39:46.539541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.494 [2024-07-24 21:39:46.539543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 [2024-07-24 21:39:47.263462] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 Malloc0 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:39.434 21:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 Malloc1 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.435 [2024-07-24 21:39:47.344542] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:39.435 00:14:39.435 Discovery Log Number of Records 2, Generation counter 2 00:14:39.435 =====Discovery Log Entry 0====== 00:14:39.435 trtype: tcp 00:14:39.435 adrfam: ipv4 00:14:39.435 subtype: current discovery subsystem 00:14:39.435 treq: not required 
00:14:39.435 portid: 0 00:14:39.435 trsvcid: 4420 00:14:39.435 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:39.435 traddr: 10.0.0.2 00:14:39.435 eflags: explicit discovery connections, duplicate discovery information 00:14:39.435 sectype: none 00:14:39.435 =====Discovery Log Entry 1====== 00:14:39.435 trtype: tcp 00:14:39.435 adrfam: ipv4 00:14:39.435 subtype: nvme subsystem 00:14:39.435 treq: not required 00:14:39.435 portid: 0 00:14:39.435 trsvcid: 4420 00:14:39.435 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:39.435 traddr: 10.0.0.2 00:14:39.435 eflags: none 00:14:39.435 sectype: none 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:39.435 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:40.817 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:40.817 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local i=0 00:14:40.817 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.817 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:14:40.817 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:14:40.817 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # sleep 2 00:14:42.727 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:14:42.727 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:14:42.727 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.727 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:14:42.727 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.727 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # return 0 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:42.728 /dev/nvme0n1 ]] 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.728 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:42.988 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:42.988 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.988 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:42.988 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.988 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:42.988 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:42.988 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.988 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:42.988 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:42.988 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.988 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:42.988 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.248 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # local i=0 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # return 0 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.248 rmmod nvme_tcp 00:14:43.248 rmmod nvme_fabrics 00:14:43.248 rmmod nvme_keyring 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3033036 ']' 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3033036 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3033036 ']' 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3033036 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3033036 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3033036' 00:14:43.248 killing process with pid 3033036 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3033036 00:14:43.248 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3033036 00:14:43.509 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:43.509 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:43.509 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:43.509 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.509 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:43.509 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.509 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.509 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:46.051 00:14:46.051 real 0m12.512s 00:14:46.051 user 0m21.377s 00:14:46.051 sys 0m4.444s 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.051 ************************************ 00:14:46.051 END TEST nvmf_nvme_cli 00:14:46.051 ************************************ 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:46.051 ************************************ 00:14:46.051 START TEST nvmf_vfio_user 00:14:46.051 ************************************ 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:46.051 * Looking for test storage... 
00:14:46.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:46.051 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:46.052 21:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3034335 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3034335' 00:14:46.052 Process pid: 3034335 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3034335 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3034335 ']' 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.052 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:46.052 [2024-07-24 21:39:53.866514] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:14:46.052 [2024-07-24 21:39:53.866565] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.052 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.052 [2024-07-24 21:39:53.922001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.052 [2024-07-24 21:39:54.006452] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.052 [2024-07-24 21:39:54.006481] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:46.052 [2024-07-24 21:39:54.006488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.052 [2024-07-24 21:39:54.006494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.052 [2024-07-24 21:39:54.006499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.052 [2024-07-24 21:39:54.006543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.052 [2024-07-24 21:39:54.006627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.052 [2024-07-24 21:39:54.006722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.052 [2024-07-24 21:39:54.006723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.622 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:46.622 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:46.622 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:48.006 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:48.006 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:48.006 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:48.006 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:48.006 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:48.006 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:48.006 Malloc1 00:14:48.006 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:48.265 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:48.525 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:48.525 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:48.525 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:48.525 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:48.784 Malloc2 00:14:48.784 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:14:49.044 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:49.303 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:49.303 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:49.303 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:49.303 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.304 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:49.304 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:49.304 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:49.304 [2024-07-24 21:39:57.411605] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:14:49.304 [2024-07-24 21:39:57.411640] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035033 ] 00:14:49.304 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.566 [2024-07-24 21:39:57.440578] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:49.566 [2024-07-24 21:39:57.450402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:49.566 [2024-07-24 21:39:57.450421] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff34bd83000 00:14:49.566 [2024-07-24 21:39:57.451403] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.566 [2024-07-24 21:39:57.452402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.566 [2024-07-24 21:39:57.453415] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.566 [2024-07-24 21:39:57.454417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:49.566 [2024-07-24 21:39:57.455428] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:49.566 [2024-07-24 21:39:57.456431] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.566 [2024-07-24 21:39:57.457439] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:49.566 [2024-07-24 21:39:57.458452] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.566 [2024-07-24 21:39:57.459451] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:49.566 [2024-07-24 21:39:57.459462] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff34bd78000 00:14:49.566 [2024-07-24 21:39:57.460404] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:49.566 [2024-07-24 21:39:57.473011] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:49.566 [2024-07-24 21:39:57.473033] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:49.566 [2024-07-24 21:39:57.475547] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:49.566 [2024-07-24 21:39:57.475587] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:49.566 [2024-07-24 21:39:57.475662] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:49.566 [2024-07-24 21:39:57.475678] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:49.566 [2024-07-24 21:39:57.475684] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:49.566 [2024-07-24 21:39:57.476546] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:49.566 [2024-07-24 21:39:57.476558] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:49.566 [2024-07-24 21:39:57.476564] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:49.566 [2024-07-24 21:39:57.477554] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:49.566 [2024-07-24 21:39:57.477562] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:49.566 [2024-07-24 21:39:57.477572] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:49.566 [2024-07-24 21:39:57.478559] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:49.566 [2024-07-24 21:39:57.478567] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:49.566 [2024-07-24 21:39:57.480049] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:49.566 [2024-07-24 21:39:57.480056] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:49.566 [2024-07-24 21:39:57.480061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:49.566 [2024-07-24 21:39:57.480066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:49.566 [2024-07-24 21:39:57.480172] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:49.566 [2024-07-24 21:39:57.480182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:49.566 [2024-07-24 21:39:57.480188] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:49.566 [2024-07-24 21:39:57.480574] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:49.566 [2024-07-24 21:39:57.481578] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:49.566 [2024-07-24 21:39:57.482590] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:49.566 [2024-07-24 21:39:57.483587] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.566 [2024-07-24 21:39:57.483651] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:49.566 [2024-07-24 21:39:57.484600] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:49.566 [2024-07-24 21:39:57.484607] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:49.566 [2024-07-24 21:39:57.484611] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:49.566 [2024-07-24 21:39:57.484628] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:49.566 [2024-07-24 21:39:57.484639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:49.566 [2024-07-24 21:39:57.484655] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.566 [2024-07-24 21:39:57.484660] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.566 [2024-07-24 21:39:57.484663] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.566 [2024-07-24 21:39:57.484677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.566 [2024-07-24 21:39:57.484717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:49.566 [2024-07-24 21:39:57.484728] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:49.566 [2024-07-24 21:39:57.484733] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:49.566 [2024-07-24 21:39:57.484736] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:49.566 [2024-07-24 21:39:57.484740] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:49.566 [2024-07-24 21:39:57.484744] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:49.566 [2024-07-24 21:39:57.484748] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:49.566 [2024-07-24 21:39:57.484752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:49.566 [2024-07-24 21:39:57.484759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:49.566 [2024-07-24 21:39:57.484772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:49.566 [2024-07-24 21:39:57.484790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:49.566 [2024-07-24 21:39:57.484802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.566 [2024-07-24 21:39:57.484810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.566 [2024-07-24 21:39:57.484817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.566 [2024-07-24 21:39:57.484824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.566 [2024-07-24 21:39:57.484828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:49.566 [2024-07-24 21:39:57.484836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:49.566 [2024-07-24 21:39:57.484844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:49.566 [2024-07-24 21:39:57.484853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:49.566 [2024-07-24 21:39:57.484858] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:49.566 
[2024-07-24 21:39:57.484863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:49.566 [2024-07-24 21:39:57.484870] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:49.566 [2024-07-24 21:39:57.484876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:49.566 [2024-07-24 21:39:57.484884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:49.567 [2024-07-24 21:39:57.484893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:49.567 [2024-07-24 21:39:57.484945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:49.567 [2024-07-24 21:39:57.484955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:49.567 [2024-07-24 21:39:57.484962] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:49.567 [2024-07-24 21:39:57.484965] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:49.567 [2024-07-24 21:39:57.484969] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.567 [2024-07-24 21:39:57.484974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:49.567 [2024-07-24 21:39:57.484988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:49.567 [2024-07-24 21:39:57.484996] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:49.567 [2024-07-24 21:39:57.485004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:49.567 [2024-07-24 21:39:57.485011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:49.567 [2024-07-24 21:39:57.485017] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.567 [2024-07-24 21:39:57.485020] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.567 [2024-07-24 21:39:57.485023] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.567 [2024-07-24 21:39:57.485029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.567 [2024-07-24 21:39:57.485052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:49.567 [2024-07-24 21:39:57.485064] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:14:49.567 [2024-07-24 21:39:57.485071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:49.567 [2024-07-24 21:39:57.485077] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.567 [2024-07-24 21:39:57.485080] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.567 [2024-07-24 21:39:57.485083] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.567 [2024-07-24 21:39:57.485089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.567 [2024-07-24 21:39:57.485098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:49.567 [2024-07-24 21:39:57.485106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:49.567 [2024-07-24 21:39:57.485111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:49.567 [2024-07-24 21:39:57.485118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:49.567 [2024-07-24 21:39:57.485126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:49.567 [2024-07-24 21:39:57.485130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:49.567 [2024-07-24 21:39:57.485137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:49.567 [2024-07-24 21:39:57.485141] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:49.567 [2024-07-24 21:39:57.485145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:49.567 [2024-07-24 21:39:57.485150] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:49.567 [2024-07-24 21:39:57.485166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:49.567 [2024-07-24 21:39:57.485175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:49.567 [2024-07-24 21:39:57.485185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:49.567 [2024-07-24 21:39:57.485193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:49.567 [2024-07-24 21:39:57.485203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:49.567 [2024-07-24 
21:39:57.485214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:49.567 [2024-07-24 21:39:57.485224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:49.567 [2024-07-24 21:39:57.485234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:49.567 [2024-07-24 21:39:57.485245] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:49.567 [2024-07-24 21:39:57.485249] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:49.567 [2024-07-24 21:39:57.485252] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:49.567 [2024-07-24 21:39:57.485255] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:49.567 [2024-07-24 21:39:57.485258] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:49.567 [2024-07-24 21:39:57.485263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:49.567 [2024-07-24 21:39:57.485269] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:49.567 [2024-07-24 21:39:57.485273] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:49.567 [2024-07-24 21:39:57.485276] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.567 [2024-07-24 21:39:57.485281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:49.567 [2024-07-24 21:39:57.485287] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:49.567 [2024-07-24 21:39:57.485291] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.567 [2024-07-24 21:39:57.485294] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.567 [2024-07-24 21:39:57.485299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.567 [2024-07-24 21:39:57.485305] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:49.567 [2024-07-24 21:39:57.485309] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:49.567 [2024-07-24 21:39:57.485313] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.567 [2024-07-24 21:39:57.485318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:49.567 [2024-07-24 21:39:57.485324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:49.567 [2024-07-24 21:39:57.485407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:49.567 [2024-07-24 
21:39:57.485417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:49.567 [2024-07-24 21:39:57.485423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:49.567 ===================================================== 00:14:49.567 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:49.567 ===================================================== 00:14:49.567 Controller Capabilities/Features 00:14:49.567 ================================ 00:14:49.567 Vendor ID: 4e58 00:14:49.567 Subsystem Vendor ID: 4e58 00:14:49.567 Serial Number: SPDK1 00:14:49.567 Model Number: SPDK bdev Controller 00:14:49.567 Firmware Version: 24.09 00:14:49.567 Recommended Arb Burst: 6 00:14:49.567 IEEE OUI Identifier: 8d 6b 50 00:14:49.567 Multi-path I/O 00:14:49.567 May have multiple subsystem ports: Yes 00:14:49.567 May have multiple controllers: Yes 00:14:49.567 Associated with SR-IOV VF: No 00:14:49.567 Max Data Transfer Size: 131072 00:14:49.567 Max Number of Namespaces: 32 00:14:49.567 Max Number of I/O Queues: 127 00:14:49.567 NVMe Specification Version (VS): 1.3 00:14:49.567 NVMe Specification Version (Identify): 1.3 00:14:49.567 Maximum Queue Entries: 256 00:14:49.567 Contiguous Queues Required: Yes 00:14:49.567 Arbitration Mechanisms Supported 00:14:49.567 Weighted Round Robin: Not Supported 00:14:49.568 Vendor Specific: Not Supported 00:14:49.568 Reset Timeout: 15000 ms 00:14:49.568 Doorbell Stride: 4 bytes 00:14:49.568 NVM Subsystem Reset: Not Supported 00:14:49.568 Command Sets Supported 00:14:49.568 NVM Command Set: Supported 00:14:49.568 Boot Partition: Not Supported 00:14:49.568 Memory Page Size Minimum: 4096 bytes 00:14:49.568 Memory Page Size Maximum: 4096 bytes 00:14:49.568 Persistent Memory Region: Not Supported 00:14:49.568 Optional Asynchronous Events Supported 00:14:49.568 Namespace Attribute Notices: Supported 00:14:49.568 Firmware Activation Notices: Not Supported 00:14:49.568 ANA Change Notices: Not Supported 00:14:49.568 PLE Aggregate Log Change Notices: Not Supported 00:14:49.568 LBA Status Info Alert Notices: Not Supported 00:14:49.568 EGE Aggregate Log Change Notices: Not Supported 00:14:49.568 Normal NVM Subsystem Shutdown event: Not Supported 00:14:49.568 Zone Descriptor Change Notices: Not Supported 00:14:49.568 Discovery Log Change Notices: Not Supported 00:14:49.568 Controller Attributes 00:14:49.568 128-bit Host Identifier: Supported 00:14:49.568 Non-Operational Permissive Mode: Not Supported 00:14:49.568 NVM Sets: Not Supported 00:14:49.568 Read Recovery Levels: Not Supported 00:14:49.568 Endurance Groups: Not Supported 00:14:49.568 Predictable Latency Mode: Not Supported 00:14:49.568 Traffic Based Keep ALive: Not Supported 00:14:49.568 Namespace Granularity: Not Supported 00:14:49.568 SQ Associations: Not Supported 00:14:49.568 UUID List: Not Supported 00:14:49.568 Multi-Domain Subsystem: Not Supported 00:14:49.568 Fixed Capacity Management: Not Supported 00:14:49.568 Variable Capacity Management: Not Supported 00:14:49.568 Delete Endurance Group: Not Supported 00:14:49.568 Delete NVM Set: Not Supported 00:14:49.568 Extended LBA Formats Supported: Not Supported 00:14:49.568 Flexible Data Placement Supported: Not Supported 00:14:49.568 00:14:49.568 Controller Memory Buffer Support 00:14:49.568 ================================ 00:14:49.568 Supported: No 00:14:49.568 00:14:49.568 Persistent 
Memory Region Support 00:14:49.568 ================================ 00:14:49.568 Supported: No 00:14:49.568 00:14:49.568 Admin Command Set Attributes 00:14:49.568 ============================ 00:14:49.568 Security Send/Receive: Not Supported 00:14:49.568 Format NVM: Not Supported 00:14:49.568 Firmware Activate/Download: Not Supported 00:14:49.568 Namespace Management: Not Supported 00:14:49.568 Device Self-Test: Not Supported 00:14:49.568 Directives: Not Supported 00:14:49.568 NVMe-MI: Not Supported 00:14:49.568 Virtualization Management: Not Supported 00:14:49.568 Doorbell Buffer Config: Not Supported 00:14:49.568 Get LBA Status Capability: Not Supported 00:14:49.568 Command & Feature Lockdown Capability: Not Supported 00:14:49.568 Abort Command Limit: 4 00:14:49.568 Async Event Request Limit: 4 00:14:49.568 Number of Firmware Slots: N/A 00:14:49.568 Firmware Slot 1 Read-Only: N/A 00:14:49.568 Firmware Activation Without Reset: N/A 00:14:49.568 Multiple Update Detection Support: N/A 00:14:49.568 Firmware Update Granularity: No Information Provided 00:14:49.568 Per-Namespace SMART Log: No 00:14:49.568 Asymmetric Namespace Access Log Page: Not Supported 00:14:49.568 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:49.568 Command Effects Log Page: Supported 00:14:49.568 Get Log Page Extended Data: Supported 00:14:49.568 Telemetry Log Pages: Not Supported 00:14:49.568 Persistent Event Log Pages: Not Supported 00:14:49.568 Supported Log Pages Log Page: May Support 00:14:49.568 Commands Supported & Effects Log Page: Not Supported 00:14:49.568 Feature Identifiers & Effects Log Page:May Support 00:14:49.568 NVMe-MI Commands & Effects Log Page: May Support 00:14:49.568 Data Area 4 for Telemetry Log: Not Supported 00:14:49.568 Error Log Page Entries Supported: 128 00:14:49.568 Keep Alive: Supported 00:14:49.568 Keep Alive Granularity: 10000 ms 00:14:49.568 00:14:49.568 NVM Command Set Attributes 00:14:49.568 ========================== 00:14:49.568 Submission Queue Entry Size 00:14:49.568 Max: 64 00:14:49.568 Min: 64 00:14:49.568 Completion Queue Entry Size 00:14:49.568 Max: 16 00:14:49.568 Min: 16 00:14:49.568 Number of Namespaces: 32 00:14:49.568 Compare Command: Supported 00:14:49.568 Write Uncorrectable Command: Not Supported 00:14:49.568 Dataset Management Command: Supported 00:14:49.568 Write Zeroes Command: Supported 00:14:49.568 Set Features Save Field: Not Supported 00:14:49.568 Reservations: Not Supported 00:14:49.568 Timestamp: Not Supported 00:14:49.568 Copy: Supported 00:14:49.568 Volatile Write Cache: Present 00:14:49.568 Atomic Write Unit (Normal): 1 00:14:49.568 Atomic Write Unit (PFail): 1 00:14:49.568 Atomic Compare & Write Unit: 1 00:14:49.568 Fused Compare & Write: Supported 00:14:49.568 Scatter-Gather List 00:14:49.568 SGL Command Set: Supported (Dword aligned) 00:14:49.568 SGL Keyed: Not Supported 00:14:49.568 SGL Bit Bucket Descriptor: Not Supported 00:14:49.568 SGL Metadata Pointer: Not Supported 00:14:49.568 Oversized SGL: Not Supported 00:14:49.568 SGL Metadata Address: Not Supported 00:14:49.568 SGL Offset: Not Supported 00:14:49.568 Transport SGL Data Block: Not Supported 00:14:49.568 Replay Protected Memory Block: Not Supported 00:14:49.568 00:14:49.568 Firmware Slot Information 00:14:49.568 ========================= 00:14:49.568 Active slot: 1 00:14:49.568 Slot 1 Firmware Revision: 24.09 00:14:49.568 00:14:49.568 00:14:49.568 Commands Supported and Effects 00:14:49.568 ============================== 00:14:49.568 Admin Commands 00:14:49.568 -------------- 00:14:49.568 Get 
Log Page (02h): Supported 00:14:49.568 Identify (06h): Supported 00:14:49.568 Abort (08h): Supported 00:14:49.568 Set Features (09h): Supported 00:14:49.568 Get Features (0Ah): Supported 00:14:49.568 Asynchronous Event Request (0Ch): Supported 00:14:49.568 Keep Alive (18h): Supported 00:14:49.568 I/O Commands 00:14:49.568 ------------ 00:14:49.568 Flush (00h): Supported LBA-Change 00:14:49.568 Write (01h): Supported LBA-Change 00:14:49.568 Read (02h): Supported 00:14:49.568 Compare (05h): Supported 00:14:49.568 Write Zeroes (08h): Supported LBA-Change 00:14:49.568 Dataset Management (09h): Supported LBA-Change 00:14:49.568 Copy (19h): Supported LBA-Change 00:14:49.568 00:14:49.568 Error Log 00:14:49.568 ========= 00:14:49.568 00:14:49.568 Arbitration 00:14:49.568 =========== 00:14:49.568 Arbitration Burst: 1 00:14:49.568 00:14:49.568 Power Management 00:14:49.568 ================ 00:14:49.568 Number of Power States: 1 00:14:49.568 Current Power State: Power State #0 00:14:49.568 Power State #0: 00:14:49.568 Max Power: 0.00 W 00:14:49.568 Non-Operational State: Operational 00:14:49.568 Entry Latency: Not Reported 00:14:49.568 Exit Latency: Not Reported 00:14:49.568 Relative Read Throughput: 0 00:14:49.568 Relative Read Latency: 0 00:14:49.568 Relative Write Throughput: 0 00:14:49.568 Relative Write Latency: 0 00:14:49.568 Idle Power: Not Reported 00:14:49.568 Active Power: Not Reported 00:14:49.568 Non-Operational Permissive Mode: Not Supported 00:14:49.568 00:14:49.568 Health Information 00:14:49.568 ================== 00:14:49.568 Critical Warnings: 00:14:49.568 Available Spare Space: OK 00:14:49.568 Temperature: OK 00:14:49.568 Device Reliability: OK 00:14:49.568 Read Only: No 00:14:49.568 Volatile Memory Backup: OK 00:14:49.568 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:49.568 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:49.568 Available Spare: 0% 00:14:49.568 Available Sp[2024-07-24 21:39:57.485506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:49.568 [2024-07-24 21:39:57.485515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:49.568 [2024-07-24 21:39:57.485538] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:49.568 [2024-07-24 21:39:57.485546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.568 [2024-07-24 21:39:57.485552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.568 [2024-07-24 21:39:57.485557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.568 [2024-07-24 21:39:57.485562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.568 [2024-07-24 21:39:57.488052] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:49.569 [2024-07-24 21:39:57.488064] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:49.569 [2024-07-24 21:39:57.488617] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:49.569 [2024-07-24 21:39:57.488664] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:49.569 [2024-07-24 21:39:57.488670] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:49.569 [2024-07-24 21:39:57.489628] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:49.569 [2024-07-24 21:39:57.489638] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:49.569 [2024-07-24 21:39:57.489686] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:49.569 [2024-07-24 21:39:57.491654] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:49.569 are Threshold: 0% 00:14:49.569 Life Percentage Used: 0% 00:14:49.569 Data Units Read: 0 00:14:49.569 Data Units Written: 0 00:14:49.569 Host Read Commands: 0 00:14:49.569 Host Write Commands: 0 00:14:49.569 Controller Busy Time: 0 minutes 00:14:49.569 Power Cycles: 0 00:14:49.569 Power On Hours: 0 hours 00:14:49.569 Unsafe Shutdowns: 0 00:14:49.569 Unrecoverable Media Errors: 0 00:14:49.569 Lifetime Error Log Entries: 0 00:14:49.569 Warning Temperature Time: 0 minutes 00:14:49.569 Critical Temperature Time: 0 minutes 00:14:49.569 00:14:49.569 Number of Queues 00:14:49.569 ================ 00:14:49.569 Number of I/O Submission Queues: 127 00:14:49.569 Number of I/O Completion Queues: 127 00:14:49.569 00:14:49.569 Active Namespaces 00:14:49.569 ================= 00:14:49.569 Namespace ID:1 00:14:49.569 Error Recovery Timeout: Unlimited 00:14:49.569 Command Set Identifier: NVM (00h) 00:14:49.569 Deallocate: Supported 00:14:49.569 Deallocated/Unwritten Error: Not Supported 00:14:49.569 Deallocated Read Value: Unknown 00:14:49.569 Deallocate in Write Zeroes: Not Supported 00:14:49.569 Deallocated Guard Field: 0xFFFF 00:14:49.569 Flush: Supported 00:14:49.569 Reservation: Supported 00:14:49.569 Namespace Sharing Capabilities: Multiple Controllers 00:14:49.569 Size (in LBAs): 131072 (0GiB) 00:14:49.569 Capacity (in LBAs): 131072 (0GiB) 00:14:49.569 Utilization (in LBAs): 131072 (0GiB) 00:14:49.569 NGUID: F3F855865B5D4996A1961A36E9F4CF52 00:14:49.569 UUID: f3f85586-5b5d-4996-a196-1a36e9f4cf52 00:14:49.569 Thin Provisioning: Not Supported 00:14:49.569 Per-NS Atomic Units: Yes 00:14:49.569 Atomic Boundary Size (Normal): 0 00:14:49.569 Atomic Boundary Size (PFail): 0 00:14:49.569 Atomic Boundary Offset: 0 00:14:49.569 Maximum Single Source Range Length: 65535 00:14:49.569 Maximum Copy Length: 65535 00:14:49.569 Maximum Source Range Count: 1 00:14:49.569 NGUID/EUI64 Never Reused: No 00:14:49.569 Namespace Write Protected: No 00:14:49.569 Number of LBA Formats: 1 00:14:49.569 Current LBA Format: LBA Format #00 00:14:49.569 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:49.569 00:14:49.569 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:49.569 EAL: No free 2048 kB hugepages reported 
on node 1 00:14:49.829 [2024-07-24 21:39:57.708861] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.151 Initializing NVMe Controllers 00:14:55.151 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:55.151 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:55.151 Initialization complete. Launching workers. 00:14:55.151 ======================================================== 00:14:55.151 Latency(us) 00:14:55.151 Device Information : IOPS MiB/s Average min max 00:14:55.151 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39897.15 155.85 3209.68 957.93 6789.92 00:14:55.151 ======================================================== 00:14:55.152 Total : 39897.15 155.85 3209.68 957.93 6789.92 00:14:55.152 00:14:55.152 [2024-07-24 21:40:02.733589] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.152 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:55.152 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.152 [2024-07-24 21:40:02.945589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.432 Initializing NVMe Controllers 00:15:00.432 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:00.432 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:00.432 Initialization complete. Launching workers. 
00:15:00.432 ======================================================== 00:15:00.432 Latency(us) 00:15:00.432 Device Information : IOPS MiB/s Average min max 00:15:00.432 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.28 62.71 7978.27 5985.38 8977.28 00:15:00.432 ======================================================== 00:15:00.432 Total : 16054.28 62.71 7978.27 5985.38 8977.28 00:15:00.432 00:15:00.432 [2024-07-24 21:40:07.987195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.432 21:40:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:00.432 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.432 [2024-07-24 21:40:08.186161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:05.715 [2024-07-24 21:40:13.255323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.715 Initializing NVMe Controllers 00:15:05.715 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:05.715 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:05.715 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:05.715 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:05.715 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:05.715 Initialization complete. Launching workers. 00:15:05.715 Starting thread on core 2 00:15:05.715 Starting thread on core 3 00:15:05.715 Starting thread on core 1 00:15:05.715 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:05.715 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.715 [2024-07-24 21:40:13.536426] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.912 [2024-07-24 21:40:17.381588] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.912 Initializing NVMe Controllers 00:15:09.912 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.912 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.912 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:09.912 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:09.912 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:09.912 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:09.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:09.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:09.912 Initialization complete. Launching workers. 
00:15:09.912 Starting thread on core 1 with urgent priority queue 00:15:09.912 Starting thread on core 2 with urgent priority queue 00:15:09.912 Starting thread on core 3 with urgent priority queue 00:15:09.912 Starting thread on core 0 with urgent priority queue 00:15:09.912 SPDK bdev Controller (SPDK1 ) core 0: 8633.33 IO/s 11.58 secs/100000 ios 00:15:09.912 SPDK bdev Controller (SPDK1 ) core 1: 6063.00 IO/s 16.49 secs/100000 ios 00:15:09.912 SPDK bdev Controller (SPDK1 ) core 2: 7689.33 IO/s 13.01 secs/100000 ios 00:15:09.912 SPDK bdev Controller (SPDK1 ) core 3: 9180.33 IO/s 10.89 secs/100000 ios 00:15:09.912 ======================================================== 00:15:09.912 00:15:09.912 21:40:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:09.912 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.912 [2024-07-24 21:40:17.656511] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.912 Initializing NVMe Controllers 00:15:09.912 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.912 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.912 Namespace ID: 1 size: 0GB 00:15:09.912 Initialization complete. 00:15:09.912 INFO: using host memory buffer for IO 00:15:09.912 Hello world! 00:15:09.912 [2024-07-24 21:40:17.692729] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.912 21:40:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:09.912 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.912 [2024-07-24 21:40:17.957429] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.296 Initializing NVMe Controllers 00:15:11.296 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.296 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.296 Initialization complete. Launching workers. 
00:15:11.296 submit (in ns) avg, min, max = 7616.9, 3220.0, 3999076.5 00:15:11.296 complete (in ns) avg, min, max = 19417.7, 1763.5, 3998184.3 00:15:11.296 00:15:11.296 Submit histogram 00:15:11.296 ================ 00:15:11.296 Range in us Cumulative Count 00:15:11.296 3.214 - 3.228: 0.0061% ( 1) 00:15:11.296 3.242 - 3.256: 0.0184% ( 2) 00:15:11.296 3.270 - 3.283: 0.0735% ( 9) 00:15:11.296 3.283 - 3.297: 0.6002% ( 86) 00:15:11.296 3.297 - 3.311: 2.9950% ( 391) 00:15:11.296 3.311 - 3.325: 7.8336% ( 790) 00:15:11.296 3.325 - 3.339: 12.8683% ( 822) 00:15:11.296 3.339 - 3.353: 18.7971% ( 968) 00:15:11.296 3.353 - 3.367: 25.4486% ( 1086) 00:15:11.296 3.367 - 3.381: 31.5183% ( 991) 00:15:11.296 3.381 - 3.395: 36.7244% ( 850) 00:15:11.296 3.395 - 3.409: 42.3409% ( 917) 00:15:11.296 3.409 - 3.423: 46.8488% ( 736) 00:15:11.296 3.423 - 3.437: 50.8850% ( 659) 00:15:11.296 3.437 - 3.450: 56.4341% ( 906) 00:15:11.296 3.450 - 3.464: 63.1347% ( 1094) 00:15:11.296 3.464 - 3.478: 68.2244% ( 831) 00:15:11.296 3.478 - 3.492: 72.5057% ( 699) 00:15:11.296 3.492 - 3.506: 77.8894% ( 879) 00:15:11.296 3.506 - 3.520: 81.7603% ( 632) 00:15:11.296 3.520 - 3.534: 84.3082% ( 416) 00:15:11.296 3.534 - 3.548: 85.9803% ( 273) 00:15:11.296 3.548 - 3.562: 86.8133% ( 136) 00:15:11.296 3.562 - 3.590: 87.7197% ( 148) 00:15:11.296 3.590 - 3.617: 88.8834% ( 190) 00:15:11.296 3.617 - 3.645: 90.6290% ( 285) 00:15:11.296 3.645 - 3.673: 92.4297% ( 294) 00:15:11.296 3.673 - 3.701: 94.2059% ( 290) 00:15:11.296 3.701 - 3.729: 95.7861% ( 258) 00:15:11.296 3.729 - 3.757: 97.4092% ( 265) 00:15:11.296 3.757 - 3.784: 98.4810% ( 175) 00:15:11.296 3.784 - 3.812: 98.9894% ( 83) 00:15:11.296 3.812 - 3.840: 99.3630% ( 61) 00:15:11.296 3.840 - 3.868: 99.5468% ( 30) 00:15:11.296 3.868 - 3.896: 99.6141% ( 11) 00:15:11.296 3.896 - 3.923: 99.6448% ( 5) 00:15:11.296 3.923 - 3.951: 99.6509% ( 1) 00:15:11.296 4.035 - 4.063: 99.6570% ( 1) 00:15:11.296 4.118 - 4.146: 99.6631% ( 1) 00:15:11.296 4.174 - 4.202: 99.6693% ( 1) 00:15:11.296 5.231 - 5.259: 99.6754% ( 1) 00:15:11.296 5.398 - 5.426: 99.6815% ( 1) 00:15:11.296 5.537 - 5.565: 99.6876% ( 1) 00:15:11.296 5.621 - 5.649: 99.6999% ( 2) 00:15:11.296 5.677 - 5.704: 99.7121% ( 2) 00:15:11.296 5.704 - 5.732: 99.7183% ( 1) 00:15:11.296 5.955 - 5.983: 99.7305% ( 2) 00:15:11.296 6.205 - 6.233: 99.7366% ( 1) 00:15:11.296 6.233 - 6.261: 99.7428% ( 1) 00:15:11.296 6.344 - 6.372: 99.7489% ( 1) 00:15:11.296 6.372 - 6.400: 99.7550% ( 1) 00:15:11.296 6.483 - 6.511: 99.7673% ( 2) 00:15:11.296 6.539 - 6.567: 99.7734% ( 1) 00:15:11.296 6.567 - 6.595: 99.7795% ( 1) 00:15:11.296 6.623 - 6.650: 99.7856% ( 1) 00:15:11.296 6.678 - 6.706: 99.7918% ( 1) 00:15:11.296 6.817 - 6.845: 99.7979% ( 1) 00:15:11.296 6.901 - 6.929: 99.8040% ( 1) 00:15:11.296 6.984 - 7.012: 99.8101% ( 1) 00:15:11.296 7.012 - 7.040: 99.8224% ( 2) 00:15:11.296 7.123 - 7.179: 99.8285% ( 1) 00:15:11.296 7.179 - 7.235: 99.8408% ( 2) 00:15:11.296 7.235 - 7.290: 99.8530% ( 2) 00:15:11.296 7.346 - 7.402: 99.8591% ( 1) 00:15:11.296 7.457 - 7.513: 99.8653% ( 1) 00:15:11.296 7.847 - 7.903: 99.8775% ( 2) 00:15:11.296 7.903 - 7.958: 99.8836% ( 1) 00:15:11.296 8.960 - 9.016: 99.8898% ( 1) 00:15:11.296 10.741 - 10.797: 99.8959% ( 1) 00:15:11.296 3989.148 - 4017.642: 100.0000% ( 17) 00:15:11.296 00:15:11.296 Complete histogram 00:15:11.296 ================== 00:15:11.296 Range in us Cumulative Count 00:15:11.296 1.760 - 1.767: 0.0306% ( 5) 00:15:11.296 1.767 - 1.774: 0.0612% ( 5) 00:15:11.296 1.774 - 1.781: 0.0857% ( 4) 00:15:11.296 1.781 - 1.795: 0.1164% ( 5) 
00:15:11.296 1.795 - 1.809: 0.2389% ( 20) 00:15:11.296 1.809 - 1.823: 8.2379% ( 1306) 00:15:11.296 1.823 - 1.837: 36.9082% ( 4681) 00:15:11.296 1.837 - [2024-07-24 21:40:18.982357] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.296 1.850: 45.7892% ( 1450) 00:15:11.296 1.850 - 1.864: 48.2207% ( 397) 00:15:11.296 1.864 - 1.878: 58.4002% ( 1662) 00:15:11.296 1.878 - 1.892: 85.5699% ( 4436) 00:15:11.296 1.892 - 1.906: 94.3529% ( 1434) 00:15:11.296 1.906 - 1.920: 96.5823% ( 364) 00:15:11.296 1.920 - 1.934: 97.6236% ( 170) 00:15:11.296 1.934 - 1.948: 98.1319% ( 83) 00:15:11.296 1.948 - 1.962: 98.7505% ( 101) 00:15:11.296 1.962 - 1.976: 99.1058% ( 58) 00:15:11.296 1.976 - 1.990: 99.2160% ( 18) 00:15:11.296 1.990 - 2.003: 99.2834% ( 11) 00:15:11.296 2.003 - 2.017: 99.3201% ( 6) 00:15:11.296 2.017 - 2.031: 99.3263% ( 1) 00:15:11.296 2.045 - 2.059: 99.3324% ( 1) 00:15:11.296 2.337 - 2.351: 99.3385% ( 1) 00:15:11.296 3.297 - 3.311: 99.3446% ( 1) 00:15:11.296 3.339 - 3.353: 99.3508% ( 1) 00:15:11.296 3.464 - 3.478: 99.3569% ( 1) 00:15:11.296 3.617 - 3.645: 99.3630% ( 1) 00:15:11.296 3.757 - 3.784: 99.3691% ( 1) 00:15:11.296 3.784 - 3.812: 99.3753% ( 1) 00:15:11.296 3.812 - 3.840: 99.3814% ( 1) 00:15:11.296 3.979 - 4.007: 99.3875% ( 1) 00:15:11.296 4.313 - 4.341: 99.3936% ( 1) 00:15:11.296 4.341 - 4.369: 99.3998% ( 1) 00:15:11.296 4.508 - 4.536: 99.4059% ( 1) 00:15:11.296 4.536 - 4.563: 99.4120% ( 1) 00:15:11.296 4.730 - 4.758: 99.4181% ( 1) 00:15:11.296 4.758 - 4.786: 99.4243% ( 1) 00:15:11.296 4.786 - 4.814: 99.4304% ( 1) 00:15:11.296 4.814 - 4.842: 99.4365% ( 1) 00:15:11.296 4.953 - 4.981: 99.4426% ( 1) 00:15:11.296 4.981 - 5.009: 99.4488% ( 1) 00:15:11.296 5.037 - 5.064: 99.4549% ( 1) 00:15:11.296 5.120 - 5.148: 99.4610% ( 1) 00:15:11.296 5.176 - 5.203: 99.4671% ( 1) 00:15:11.296 5.203 - 5.231: 99.4733% ( 1) 00:15:11.296 5.343 - 5.370: 99.4794% ( 1) 00:15:11.296 5.370 - 5.398: 99.4855% ( 1) 00:15:11.296 5.426 - 5.454: 99.4916% ( 1) 00:15:11.296 5.510 - 5.537: 99.4978% ( 1) 00:15:11.296 5.649 - 5.677: 99.5039% ( 1) 00:15:11.296 5.677 - 5.704: 99.5100% ( 1) 00:15:11.296 5.760 - 5.788: 99.5161% ( 1) 00:15:11.296 6.177 - 6.205: 99.5284% ( 2) 00:15:11.296 6.567 - 6.595: 99.5345% ( 1) 00:15:11.296 6.873 - 6.901: 99.5406% ( 1) 00:15:11.296 10.630 - 10.685: 99.5468% ( 1) 00:15:11.296 181.649 - 182.539: 99.5529% ( 1) 00:15:11.296 1146.880 - 1154.003: 99.5590% ( 1) 00:15:11.296 1617.030 - 1624.153: 99.5651% ( 1) 00:15:11.296 3989.148 - 4017.642: 100.0000% ( 71) 00:15:11.296 00:15:11.296 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:11.296 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:11.296 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:11.296 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:11.297 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.297 [ 00:15:11.297 { 00:15:11.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.297 "subtype": "Discovery", 00:15:11.297 "listen_addresses": [], 00:15:11.297 "allow_any_host": true, 
00:15:11.297 "hosts": [] 00:15:11.297 }, 00:15:11.297 { 00:15:11.297 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.297 "subtype": "NVMe", 00:15:11.297 "listen_addresses": [ 00:15:11.297 { 00:15:11.297 "trtype": "VFIOUSER", 00:15:11.297 "adrfam": "IPv4", 00:15:11.297 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.297 "trsvcid": "0" 00:15:11.297 } 00:15:11.297 ], 00:15:11.297 "allow_any_host": true, 00:15:11.297 "hosts": [], 00:15:11.297 "serial_number": "SPDK1", 00:15:11.297 "model_number": "SPDK bdev Controller", 00:15:11.297 "max_namespaces": 32, 00:15:11.297 "min_cntlid": 1, 00:15:11.297 "max_cntlid": 65519, 00:15:11.297 "namespaces": [ 00:15:11.297 { 00:15:11.297 "nsid": 1, 00:15:11.297 "bdev_name": "Malloc1", 00:15:11.297 "name": "Malloc1", 00:15:11.297 "nguid": "F3F855865B5D4996A1961A36E9F4CF52", 00:15:11.297 "uuid": "f3f85586-5b5d-4996-a196-1a36e9f4cf52" 00:15:11.297 } 00:15:11.297 ] 00:15:11.297 }, 00:15:11.297 { 00:15:11.297 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.297 "subtype": "NVMe", 00:15:11.297 "listen_addresses": [ 00:15:11.297 { 00:15:11.297 "trtype": "VFIOUSER", 00:15:11.297 "adrfam": "IPv4", 00:15:11.297 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.297 "trsvcid": "0" 00:15:11.297 } 00:15:11.297 ], 00:15:11.297 "allow_any_host": true, 00:15:11.297 "hosts": [], 00:15:11.297 "serial_number": "SPDK2", 00:15:11.297 "model_number": "SPDK bdev Controller", 00:15:11.297 "max_namespaces": 32, 00:15:11.297 "min_cntlid": 1, 00:15:11.297 "max_cntlid": 65519, 00:15:11.297 "namespaces": [ 00:15:11.297 { 00:15:11.297 "nsid": 1, 00:15:11.297 "bdev_name": "Malloc2", 00:15:11.297 "name": "Malloc2", 00:15:11.297 "nguid": "680612B18F0A40D78D68B8D3A9C63CDB", 00:15:11.297 "uuid": "680612b1-8f0a-40d7-8d68-b8d3a9c63cdb" 00:15:11.297 } 00:15:11.297 ] 00:15:11.297 } 00:15:11.297 ] 00:15:11.297 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:11.297 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3038495 00:15:11.297 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:11.297 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:11.297 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:15:11.297 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:11.297 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:11.297 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:15:11.297 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:11.297 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:11.297 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.297 [2024-07-24 21:40:19.355492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.297 Malloc3 00:15:11.557 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:11.557 [2024-07-24 21:40:19.589314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.557 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.557 Asynchronous Event Request test 00:15:11.557 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.557 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.557 Registering asynchronous event callbacks... 00:15:11.557 Starting namespace attribute notice tests for all controllers... 00:15:11.557 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:11.557 aer_cb - Changed Namespace 00:15:11.557 Cleaning up... 00:15:11.817 [ 00:15:11.817 { 00:15:11.817 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.817 "subtype": "Discovery", 00:15:11.817 "listen_addresses": [], 00:15:11.817 "allow_any_host": true, 00:15:11.817 "hosts": [] 00:15:11.817 }, 00:15:11.817 { 00:15:11.817 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.817 "subtype": "NVMe", 00:15:11.817 "listen_addresses": [ 00:15:11.817 { 00:15:11.817 "trtype": "VFIOUSER", 00:15:11.817 "adrfam": "IPv4", 00:15:11.817 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.817 "trsvcid": "0" 00:15:11.817 } 00:15:11.817 ], 00:15:11.817 "allow_any_host": true, 00:15:11.817 "hosts": [], 00:15:11.817 "serial_number": "SPDK1", 00:15:11.817 "model_number": "SPDK bdev Controller", 00:15:11.817 "max_namespaces": 32, 00:15:11.817 "min_cntlid": 1, 00:15:11.817 "max_cntlid": 65519, 00:15:11.817 "namespaces": [ 00:15:11.817 { 00:15:11.817 "nsid": 1, 00:15:11.817 "bdev_name": "Malloc1", 00:15:11.817 "name": "Malloc1", 00:15:11.817 "nguid": "F3F855865B5D4996A1961A36E9F4CF52", 00:15:11.817 "uuid": "f3f85586-5b5d-4996-a196-1a36e9f4cf52" 00:15:11.817 }, 00:15:11.817 { 00:15:11.817 "nsid": 2, 00:15:11.817 "bdev_name": "Malloc3", 00:15:11.817 "name": "Malloc3", 00:15:11.817 "nguid": "E8E996973A9D46E0BBABC34D4FDA6C4C", 00:15:11.817 "uuid": "e8e99697-3a9d-46e0-bbab-c34d4fda6c4c" 00:15:11.817 } 00:15:11.817 ] 00:15:11.817 }, 00:15:11.817 { 00:15:11.817 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.817 "subtype": "NVMe", 00:15:11.817 "listen_addresses": [ 00:15:11.817 { 00:15:11.817 "trtype": "VFIOUSER", 00:15:11.817 "adrfam": "IPv4", 00:15:11.817 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.817 "trsvcid": "0" 00:15:11.817 } 00:15:11.817 ], 00:15:11.817 "allow_any_host": true, 00:15:11.817 "hosts": [], 00:15:11.817 
"serial_number": "SPDK2", 00:15:11.817 "model_number": "SPDK bdev Controller", 00:15:11.817 "max_namespaces": 32, 00:15:11.817 "min_cntlid": 1, 00:15:11.817 "max_cntlid": 65519, 00:15:11.817 "namespaces": [ 00:15:11.817 { 00:15:11.818 "nsid": 1, 00:15:11.818 "bdev_name": "Malloc2", 00:15:11.818 "name": "Malloc2", 00:15:11.818 "nguid": "680612B18F0A40D78D68B8D3A9C63CDB", 00:15:11.818 "uuid": "680612b1-8f0a-40d7-8d68-b8d3a9c63cdb" 00:15:11.818 } 00:15:11.818 ] 00:15:11.818 } 00:15:11.818 ] 00:15:11.818 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3038495 00:15:11.818 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:11.818 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:11.818 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:11.818 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:11.818 [2024-07-24 21:40:19.820546] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:15:11.818 [2024-07-24 21:40:19.820579] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3038715 ] 00:15:11.818 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.818 [2024-07-24 21:40:19.848427] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:11.818 [2024-07-24 21:40:19.854261] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:11.818 [2024-07-24 21:40:19.854281] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff8b7afd000 00:15:11.818 [2024-07-24 21:40:19.855260] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.818 [2024-07-24 21:40:19.856266] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.818 [2024-07-24 21:40:19.857272] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.818 [2024-07-24 21:40:19.858280] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.818 [2024-07-24 21:40:19.859289] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.818 [2024-07-24 21:40:19.860295] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.818 [2024-07-24 21:40:19.861301] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.818 [2024-07-24 21:40:19.862314] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.818 [2024-07-24 21:40:19.863320] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:11.818 [2024-07-24 21:40:19.863330] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff8b7af2000 00:15:11.818 [2024-07-24 21:40:19.864269] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:11.818 [2024-07-24 21:40:19.876786] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:11.818 [2024-07-24 21:40:19.876809] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:11.818 [2024-07-24 21:40:19.881891] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:11.818 [2024-07-24 21:40:19.881931] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:11.818 [2024-07-24 21:40:19.882003] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:11.818 [2024-07-24 21:40:19.882017] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:11.818 [2024-07-24 21:40:19.882022] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:11.818 [2024-07-24 21:40:19.882894] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:11.818 [2024-07-24 21:40:19.882906] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:11.818 [2024-07-24 21:40:19.882912] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:11.818 [2024-07-24 21:40:19.883901] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:11.818 [2024-07-24 21:40:19.883910] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:11.818 [2024-07-24 21:40:19.883917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:11.818 [2024-07-24 21:40:19.884909] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:11.818 [2024-07-24 21:40:19.884918] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:11.818 [2024-07-24 21:40:19.885916] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:11.818 [2024-07-24 21:40:19.885925] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:11.818 [2024-07-24 21:40:19.885930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:11.818 [2024-07-24 21:40:19.885935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:11.818 [2024-07-24 21:40:19.886041] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:11.818 [2024-07-24 21:40:19.886050] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:11.818 [2024-07-24 21:40:19.886054] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:11.818 [2024-07-24 21:40:19.886926] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:11.818 [2024-07-24 21:40:19.887928] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:11.818 [2024-07-24 21:40:19.888934] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:11.818 [2024-07-24 21:40:19.889940] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.818 [2024-07-24 21:40:19.889974] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:11.818 [2024-07-24 21:40:19.890953] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:11.818 [2024-07-24 21:40:19.890961] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:11.818 [2024-07-24 21:40:19.890966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:11.818 [2024-07-24 21:40:19.890983] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:11.818 [2024-07-24 21:40:19.890990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:11.818 [2024-07-24 21:40:19.891001] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.818 [2024-07-24 21:40:19.891005] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.818 [2024-07-24 21:40:19.891009] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:11.818 [2024-07-24 21:40:19.891020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.818 [2024-07-24 21:40:19.895048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:11.818 [2024-07-24 21:40:19.895059] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:11.818 [2024-07-24 21:40:19.895063] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:11.818 [2024-07-24 21:40:19.895067] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:11.818 [2024-07-24 21:40:19.895072] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:11.818 [2024-07-24 21:40:19.895076] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:11.818 [2024-07-24 21:40:19.895081] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:11.818 [2024-07-24 21:40:19.895085] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:11.818 [2024-07-24 21:40:19.895091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:11.818 [2024-07-24 21:40:19.895103] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:11.818 [2024-07-24 21:40:19.903049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:11.818 [2024-07-24 21:40:19.903064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.818 [2024-07-24 21:40:19.903072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.818 [2024-07-24 21:40:19.903079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.818 [2024-07-24 21:40:19.903086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.818 [2024-07-24 21:40:19.903090] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:11.819 [2024-07-24 21:40:19.903098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:11.819 [2024-07-24 21:40:19.903106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:11.819 [2024-07-24 21:40:19.911047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:11.819 [2024-07-24 21:40:19.911055] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:11.819 [2024-07-24 21:40:19.911060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:11.819 [2024-07-24 21:40:19.911068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:11.819 [2024-07-24 21:40:19.911074] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:11.819 [2024-07-24 21:40:19.911081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:11.819 [2024-07-24 21:40:19.919047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:11.819 [2024-07-24 21:40:19.919101] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:11.819 [2024-07-24 21:40:19.919109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:11.819 [2024-07-24 21:40:19.919116] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:11.819 [2024-07-24 21:40:19.919120] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:11.819 [2024-07-24 21:40:19.919123] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:11.819 [2024-07-24 21:40:19.919129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:11.819 [2024-07-24 21:40:19.927049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:11.819 [2024-07-24 21:40:19.927059] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:11.819 [2024-07-24 21:40:19.927070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:11.819 [2024-07-24 21:40:19.927077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:11.819 [2024-07-24 21:40:19.927087] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.819 [2024-07-24 21:40:19.927091] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.819 [2024-07-24 21:40:19.927094] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:11.819 [2024-07-24 21:40:19.927100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.080 [2024-07-24 21:40:19.935050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:12.080 [2024-07-24 21:40:19.935065] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:12.080 [2024-07-24 21:40:19.935072] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:12.080 [2024-07-24 21:40:19.935079] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.080 [2024-07-24 21:40:19.935083] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.080 [2024-07-24 21:40:19.935086] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.080 [2024-07-24 21:40:19.935092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.080 [2024-07-24 21:40:19.943051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:12.080 [2024-07-24 21:40:19.943060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:12.080 [2024-07-24 21:40:19.943066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:12.080 [2024-07-24 21:40:19.943073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:12.080 [2024-07-24 21:40:19.943080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:12.080 [2024-07-24 21:40:19.943084] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:12.080 [2024-07-24 21:40:19.943089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:12.080 [2024-07-24 21:40:19.943094] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:12.080 [2024-07-24 21:40:19.943098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:12.080 [2024-07-24 21:40:19.943103] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:12.080 [2024-07-24 21:40:19.943118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:12.080 [2024-07-24 21:40:19.951049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:12.080 [2024-07-24 21:40:19.951061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:12.080 [2024-07-24 21:40:19.959047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:12.080 [2024-07-24 21:40:19.959061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:12.080 [2024-07-24 21:40:19.967046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:12.080 [2024-07-24 21:40:19.967060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:12.080 [2024-07-24 21:40:19.972048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:12.080 [2024-07-24 21:40:19.972064] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:12.080 [2024-07-24 21:40:19.972068] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:12.080 [2024-07-24 21:40:19.972071] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:12.080 [2024-07-24 21:40:19.972074] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:12.080 [2024-07-24 21:40:19.972077] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:12.080 [2024-07-24 21:40:19.972083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:12.080 [2024-07-24 21:40:19.972090] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:12.080 [2024-07-24 21:40:19.972094] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:12.080 [2024-07-24 21:40:19.972097] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.080 [2024-07-24 21:40:19.972102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:12.080 [2024-07-24 21:40:19.972108] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:12.080 [2024-07-24 21:40:19.972112] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.080 [2024-07-24 21:40:19.972115] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.080 [2024-07-24 21:40:19.972120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.080 [2024-07-24 21:40:19.972127] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:12.080 [2024-07-24 21:40:19.972131] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:12.080 [2024-07-24 21:40:19.972134] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.080 [2024-07-24 21:40:19.972139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:12.080 [2024-07-24 21:40:19.983048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:12.080 [2024-07-24 21:40:19.983062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:12.080 [2024-07-24 21:40:19.983071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:12.081 [2024-07-24 21:40:19.983077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:12.081 ===================================================== 00:15:12.081 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:12.081 ===================================================== 00:15:12.081 Controller Capabilities/Features 00:15:12.081 ================================ 00:15:12.081 Vendor ID: 4e58 00:15:12.081 Subsystem Vendor ID: 4e58 00:15:12.081 Serial Number: SPDK2 00:15:12.081 Model Number: SPDK bdev Controller 00:15:12.081 Firmware Version: 24.09 00:15:12.081 Recommended Arb Burst: 6 00:15:12.081 IEEE OUI Identifier: 8d 6b 50 00:15:12.081 Multi-path I/O 00:15:12.081 May have multiple subsystem ports: Yes 00:15:12.081 May have multiple controllers: Yes 00:15:12.081 Associated with SR-IOV VF: No 00:15:12.081 Max Data Transfer Size: 131072 00:15:12.081 Max Number of Namespaces: 32 00:15:12.081 Max Number of I/O Queues: 127 00:15:12.081 NVMe Specification Version (VS): 1.3 00:15:12.081 NVMe Specification Version (Identify): 1.3 00:15:12.081 Maximum Queue Entries: 256 00:15:12.081 Contiguous Queues Required: Yes 00:15:12.081 Arbitration Mechanisms Supported 00:15:12.081 Weighted Round Robin: Not Supported 00:15:12.081 Vendor Specific: Not Supported 00:15:12.081 Reset Timeout: 15000 ms 00:15:12.081 Doorbell Stride: 4 bytes 00:15:12.081 NVM Subsystem Reset: Not Supported 00:15:12.081 Command Sets Supported 00:15:12.081 NVM Command Set: Supported 00:15:12.081 Boot Partition: Not Supported 00:15:12.081 Memory Page Size Minimum: 4096 bytes 00:15:12.081 Memory Page Size Maximum: 4096 bytes 00:15:12.081 Persistent Memory Region: Not Supported 00:15:12.081 Optional Asynchronous Events Supported 00:15:12.081 Namespace Attribute Notices: Supported 00:15:12.081 Firmware Activation Notices: Not Supported 00:15:12.081 ANA Change Notices: Not Supported 00:15:12.081 PLE Aggregate Log Change Notices: Not Supported 00:15:12.081 LBA Status Info Alert Notices: Not Supported 00:15:12.081 EGE Aggregate Log Change Notices: Not Supported 00:15:12.081 Normal NVM Subsystem Shutdown event: Not Supported 00:15:12.081 Zone Descriptor Change Notices: Not Supported 00:15:12.081 Discovery Log Change Notices: Not Supported 00:15:12.081 Controller Attributes 00:15:12.081 128-bit Host Identifier: Supported 00:15:12.081 Non-Operational Permissive Mode: Not Supported 00:15:12.081 NVM Sets: Not Supported 00:15:12.081 Read Recovery Levels: Not Supported 00:15:12.081 Endurance Groups: Not Supported 00:15:12.081 Predictable Latency Mode: Not Supported 00:15:12.081 Traffic Based Keep ALive: Not Supported 00:15:12.081 Namespace Granularity: Not Supported 00:15:12.081 SQ Associations: Not Supported 00:15:12.081 UUID List: Not Supported 00:15:12.081 Multi-Domain Subsystem: Not Supported 00:15:12.081 Fixed Capacity Management: Not Supported 00:15:12.081 Variable Capacity Management: Not Supported 00:15:12.081 Delete Endurance Group: Not Supported 00:15:12.081 Delete NVM Set: Not Supported 00:15:12.081 Extended LBA Formats Supported: Not Supported 00:15:12.081 Flexible Data Placement Supported: Not Supported 00:15:12.081 00:15:12.081 Controller Memory Buffer Support 00:15:12.081 ================================ 00:15:12.081 Supported: No 00:15:12.081 00:15:12.081 Persistent Memory Region Support 00:15:12.081 
================================ 00:15:12.081 Supported: No 00:15:12.081 00:15:12.081 Admin Command Set Attributes 00:15:12.081 ============================ 00:15:12.081 Security Send/Receive: Not Supported 00:15:12.081 Format NVM: Not Supported 00:15:12.081 Firmware Activate/Download: Not Supported 00:15:12.081 Namespace Management: Not Supported 00:15:12.081 Device Self-Test: Not Supported 00:15:12.081 Directives: Not Supported 00:15:12.081 NVMe-MI: Not Supported 00:15:12.081 Virtualization Management: Not Supported 00:15:12.081 Doorbell Buffer Config: Not Supported 00:15:12.081 Get LBA Status Capability: Not Supported 00:15:12.081 Command & Feature Lockdown Capability: Not Supported 00:15:12.081 Abort Command Limit: 4 00:15:12.081 Async Event Request Limit: 4 00:15:12.081 Number of Firmware Slots: N/A 00:15:12.081 Firmware Slot 1 Read-Only: N/A 00:15:12.081 Firmware Activation Without Reset: N/A 00:15:12.081 Multiple Update Detection Support: N/A 00:15:12.081 Firmware Update Granularity: No Information Provided 00:15:12.081 Per-Namespace SMART Log: No 00:15:12.081 Asymmetric Namespace Access Log Page: Not Supported 00:15:12.081 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:12.081 Command Effects Log Page: Supported 00:15:12.081 Get Log Page Extended Data: Supported 00:15:12.081 Telemetry Log Pages: Not Supported 00:15:12.081 Persistent Event Log Pages: Not Supported 00:15:12.081 Supported Log Pages Log Page: May Support 00:15:12.081 Commands Supported & Effects Log Page: Not Supported 00:15:12.081 Feature Identifiers & Effects Log Page:May Support 00:15:12.081 NVMe-MI Commands & Effects Log Page: May Support 00:15:12.081 Data Area 4 for Telemetry Log: Not Supported 00:15:12.081 Error Log Page Entries Supported: 128 00:15:12.081 Keep Alive: Supported 00:15:12.081 Keep Alive Granularity: 10000 ms 00:15:12.081 00:15:12.081 NVM Command Set Attributes 00:15:12.081 ========================== 00:15:12.081 Submission Queue Entry Size 00:15:12.081 Max: 64 00:15:12.081 Min: 64 00:15:12.081 Completion Queue Entry Size 00:15:12.081 Max: 16 00:15:12.081 Min: 16 00:15:12.081 Number of Namespaces: 32 00:15:12.081 Compare Command: Supported 00:15:12.081 Write Uncorrectable Command: Not Supported 00:15:12.081 Dataset Management Command: Supported 00:15:12.081 Write Zeroes Command: Supported 00:15:12.081 Set Features Save Field: Not Supported 00:15:12.081 Reservations: Not Supported 00:15:12.081 Timestamp: Not Supported 00:15:12.081 Copy: Supported 00:15:12.081 Volatile Write Cache: Present 00:15:12.081 Atomic Write Unit (Normal): 1 00:15:12.081 Atomic Write Unit (PFail): 1 00:15:12.081 Atomic Compare & Write Unit: 1 00:15:12.081 Fused Compare & Write: Supported 00:15:12.081 Scatter-Gather List 00:15:12.081 SGL Command Set: Supported (Dword aligned) 00:15:12.081 SGL Keyed: Not Supported 00:15:12.081 SGL Bit Bucket Descriptor: Not Supported 00:15:12.081 SGL Metadata Pointer: Not Supported 00:15:12.081 Oversized SGL: Not Supported 00:15:12.081 SGL Metadata Address: Not Supported 00:15:12.081 SGL Offset: Not Supported 00:15:12.081 Transport SGL Data Block: Not Supported 00:15:12.081 Replay Protected Memory Block: Not Supported 00:15:12.081 00:15:12.081 Firmware Slot Information 00:15:12.081 ========================= 00:15:12.081 Active slot: 1 00:15:12.081 Slot 1 Firmware Revision: 24.09 00:15:12.081 00:15:12.081 00:15:12.081 Commands Supported and Effects 00:15:12.081 ============================== 00:15:12.081 Admin Commands 00:15:12.081 -------------- 00:15:12.081 Get Log Page (02h): Supported 
00:15:12.081 Identify (06h): Supported 00:15:12.081 Abort (08h): Supported 00:15:12.081 Set Features (09h): Supported 00:15:12.081 Get Features (0Ah): Supported 00:15:12.081 Asynchronous Event Request (0Ch): Supported 00:15:12.081 Keep Alive (18h): Supported 00:15:12.081 I/O Commands 00:15:12.081 ------------ 00:15:12.081 Flush (00h): Supported LBA-Change 00:15:12.081 Write (01h): Supported LBA-Change 00:15:12.081 Read (02h): Supported 00:15:12.081 Compare (05h): Supported 00:15:12.081 Write Zeroes (08h): Supported LBA-Change 00:15:12.081 Dataset Management (09h): Supported LBA-Change 00:15:12.081 Copy (19h): Supported LBA-Change 00:15:12.081 00:15:12.081 Error Log 00:15:12.081 ========= 00:15:12.081 00:15:12.081 Arbitration 00:15:12.081 =========== 00:15:12.081 Arbitration Burst: 1 00:15:12.081 00:15:12.081 Power Management 00:15:12.081 ================ 00:15:12.081 Number of Power States: 1 00:15:12.081 Current Power State: Power State #0 00:15:12.081 Power State #0: 00:15:12.081 Max Power: 0.00 W 00:15:12.081 Non-Operational State: Operational 00:15:12.081 Entry Latency: Not Reported 00:15:12.081 Exit Latency: Not Reported 00:15:12.081 Relative Read Throughput: 0 00:15:12.081 Relative Read Latency: 0 00:15:12.081 Relative Write Throughput: 0 00:15:12.081 Relative Write Latency: 0 00:15:12.081 Idle Power: Not Reported 00:15:12.081 Active Power: Not Reported 00:15:12.081 Non-Operational Permissive Mode: Not Supported 00:15:12.081 00:15:12.082 Health Information 00:15:12.082 ================== 00:15:12.082 Critical Warnings: 00:15:12.082 Available Spare Space: OK 00:15:12.082 Temperature: OK 00:15:12.082 Device Reliability: OK 00:15:12.082 Read Only: No 00:15:12.082 Volatile Memory Backup: OK 00:15:12.082 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:12.082 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:12.082 Available Spare: 0% 00:15:12.082 Available Sp[2024-07-24 21:40:19.983166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:12.082 [2024-07-24 21:40:19.991048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:12.082 [2024-07-24 21:40:19.991076] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:12.082 [2024-07-24 21:40:19.991086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.082 [2024-07-24 21:40:19.991092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.082 [2024-07-24 21:40:19.991097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.082 [2024-07-24 21:40:19.991102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.082 [2024-07-24 21:40:19.991150] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:12.082 [2024-07-24 21:40:19.991160] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:12.082 [2024-07-24 21:40:19.992152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:15:12.082 [2024-07-24 21:40:19.992194] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:12.082 [2024-07-24 21:40:19.992200] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:12.082 [2024-07-24 21:40:19.993157] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:12.082 [2024-07-24 21:40:19.993167] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:12.082 [2024-07-24 21:40:19.993212] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:12.082 [2024-07-24 21:40:19.996047] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:12.082 are Threshold: 0% 00:15:12.082 Life Percentage Used: 0% 00:15:12.082 Data Units Read: 0 00:15:12.082 Data Units Written: 0 00:15:12.082 Host Read Commands: 0 00:15:12.082 Host Write Commands: 0 00:15:12.082 Controller Busy Time: 0 minutes 00:15:12.082 Power Cycles: 0 00:15:12.082 Power On Hours: 0 hours 00:15:12.082 Unsafe Shutdowns: 0 00:15:12.082 Unrecoverable Media Errors: 0 00:15:12.082 Lifetime Error Log Entries: 0 00:15:12.082 Warning Temperature Time: 0 minutes 00:15:12.082 Critical Temperature Time: 0 minutes 00:15:12.082 00:15:12.082 Number of Queues 00:15:12.082 ================ 00:15:12.082 Number of I/O Submission Queues: 127 00:15:12.082 Number of I/O Completion Queues: 127 00:15:12.082 00:15:12.082 Active Namespaces 00:15:12.082 ================= 00:15:12.082 Namespace ID:1 00:15:12.082 Error Recovery Timeout: Unlimited 00:15:12.082 Command Set Identifier: NVM (00h) 00:15:12.082 Deallocate: Supported 00:15:12.082 Deallocated/Unwritten Error: Not Supported 00:15:12.082 Deallocated Read Value: Unknown 00:15:12.082 Deallocate in Write Zeroes: Not Supported 00:15:12.082 Deallocated Guard Field: 0xFFFF 00:15:12.082 Flush: Supported 00:15:12.082 Reservation: Supported 00:15:12.082 Namespace Sharing Capabilities: Multiple Controllers 00:15:12.082 Size (in LBAs): 131072 (0GiB) 00:15:12.082 Capacity (in LBAs): 131072 (0GiB) 00:15:12.082 Utilization (in LBAs): 131072 (0GiB) 00:15:12.082 NGUID: 680612B18F0A40D78D68B8D3A9C63CDB 00:15:12.082 UUID: 680612b1-8f0a-40d7-8d68-b8d3a9c63cdb 00:15:12.082 Thin Provisioning: Not Supported 00:15:12.082 Per-NS Atomic Units: Yes 00:15:12.082 Atomic Boundary Size (Normal): 0 00:15:12.082 Atomic Boundary Size (PFail): 0 00:15:12.082 Atomic Boundary Offset: 0 00:15:12.082 Maximum Single Source Range Length: 65535 00:15:12.082 Maximum Copy Length: 65535 00:15:12.082 Maximum Source Range Count: 1 00:15:12.082 NGUID/EUI64 Never Reused: No 00:15:12.082 Namespace Write Protected: No 00:15:12.082 Number of LBA Formats: 1 00:15:12.082 Current LBA Format: LBA Format #00 00:15:12.082 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:12.082 00:15:12.082 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:12.082 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.341 [2024-07-24 
21:40:20.221463] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.617 Initializing NVMe Controllers 00:15:17.617 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.617 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:17.617 Initialization complete. Launching workers. 00:15:17.617 ======================================================== 00:15:17.617 Latency(us) 00:15:17.617 Device Information : IOPS MiB/s Average min max 00:15:17.617 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39950.42 156.06 3203.80 972.75 6637.88 00:15:17.617 ======================================================== 00:15:17.617 Total : 39950.42 156.06 3203.80 972.75 6637.88 00:15:17.617 00:15:17.617 [2024-07-24 21:40:25.325290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.617 21:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:17.617 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.617 [2024-07-24 21:40:25.549933] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.895 Initializing NVMe Controllers 00:15:22.895 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:22.895 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:22.895 Initialization complete. Launching workers. 
00:15:22.895 ======================================================== 00:15:22.895 Latency(us) 00:15:22.895 Device Information : IOPS MiB/s Average min max 00:15:22.895 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39952.40 156.06 3205.99 980.00 9572.92 00:15:22.895 ======================================================== 00:15:22.895 Total : 39952.40 156.06 3205.99 980.00 9572.92 00:15:22.895 00:15:22.895 [2024-07-24 21:40:30.570365] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.895 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:22.895 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.895 [2024-07-24 21:40:30.754609] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.177 [2024-07-24 21:40:35.902138] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.177 Initializing NVMe Controllers 00:15:28.177 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.177 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.177 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:28.177 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:28.177 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:28.177 Initialization complete. Launching workers. 00:15:28.177 Starting thread on core 2 00:15:28.177 Starting thread on core 3 00:15:28.177 Starting thread on core 1 00:15:28.177 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:28.177 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.177 [2024-07-24 21:40:36.184508] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.472 [2024-07-24 21:40:39.403301] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.472 Initializing NVMe Controllers 00:15:31.472 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.472 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.472 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:31.472 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:31.472 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:31.472 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:31.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:31.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:31.472 Initialization complete. Launching workers. 
00:15:31.472 Starting thread on core 1 with urgent priority queue 00:15:31.472 Starting thread on core 2 with urgent priority queue 00:15:31.472 Starting thread on core 3 with urgent priority queue 00:15:31.472 Starting thread on core 0 with urgent priority queue 00:15:31.472 SPDK bdev Controller (SPDK2 ) core 0: 6079.33 IO/s 16.45 secs/100000 ios 00:15:31.472 SPDK bdev Controller (SPDK2 ) core 1: 4897.00 IO/s 20.42 secs/100000 ios 00:15:31.472 SPDK bdev Controller (SPDK2 ) core 2: 6721.33 IO/s 14.88 secs/100000 ios 00:15:31.472 SPDK bdev Controller (SPDK2 ) core 3: 6163.67 IO/s 16.22 secs/100000 ios 00:15:31.472 ======================================================== 00:15:31.472 00:15:31.472 21:40:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:31.472 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.731 [2024-07-24 21:40:39.676515] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.731 Initializing NVMe Controllers 00:15:31.731 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.731 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.731 Namespace ID: 1 size: 0GB 00:15:31.731 Initialization complete. 00:15:31.731 INFO: using host memory buffer for IO 00:15:31.731 Hello world! 00:15:31.731 [2024-07-24 21:40:39.689615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.731 21:40:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:31.731 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.991 [2024-07-24 21:40:39.954952] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:32.930 Initializing NVMe Controllers 00:15:32.930 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.930 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.930 Initialization complete. Launching workers. 
00:15:32.930 submit (in ns) avg, min, max = 6942.1, 3258.3, 3999902.6 00:15:32.930 complete (in ns) avg, min, max = 20472.0, 1808.7, 3998636.5 00:15:32.930 00:15:32.930 Submit histogram 00:15:32.930 ================ 00:15:32.930 Range in us Cumulative Count 00:15:32.930 3.256 - 3.270: 0.0248% ( 4) 00:15:32.930 3.270 - 3.283: 0.2485% ( 36) 00:15:32.930 3.283 - 3.297: 1.1244% ( 141) 00:15:32.930 3.297 - 3.311: 4.0688% ( 474) 00:15:32.930 3.311 - 3.325: 8.8645% ( 772) 00:15:32.930 3.325 - 3.339: 14.4304% ( 896) 00:15:32.930 3.339 - 3.353: 20.6734% ( 1005) 00:15:32.930 3.353 - 3.367: 26.5002% ( 938) 00:15:32.930 3.367 - 3.381: 31.7306% ( 842) 00:15:32.930 3.381 - 3.395: 37.0046% ( 849) 00:15:32.930 3.395 - 3.409: 42.1605% ( 830) 00:15:32.930 3.409 - 3.423: 46.0865% ( 632) 00:15:32.930 3.423 - 3.437: 49.7391% ( 588) 00:15:32.930 3.437 - 3.450: 54.5471% ( 774) 00:15:32.930 3.450 - 3.464: 61.6847% ( 1149) 00:15:32.930 3.464 - 3.478: 67.2506% ( 896) 00:15:32.930 3.478 - 3.492: 72.2202% ( 800) 00:15:32.930 3.492 - 3.506: 77.3388% ( 824) 00:15:32.930 3.506 - 3.520: 81.1467% ( 613) 00:15:32.930 3.520 - 3.534: 83.6439% ( 402) 00:15:32.930 3.534 - 3.548: 85.3895% ( 281) 00:15:32.930 3.548 - 3.562: 86.2405% ( 137) 00:15:32.930 3.562 - 3.590: 87.1910% ( 153) 00:15:32.930 3.590 - 3.617: 88.4333% ( 200) 00:15:32.930 3.617 - 3.645: 90.0857% ( 266) 00:15:32.930 3.645 - 3.673: 91.8934% ( 291) 00:15:32.930 3.673 - 3.701: 93.4899% ( 257) 00:15:32.930 3.701 - 3.729: 95.2541% ( 284) 00:15:32.930 3.729 - 3.757: 96.8878% ( 263) 00:15:32.930 3.757 - 3.784: 97.9066% ( 164) 00:15:32.930 3.784 - 3.812: 98.7079% ( 129) 00:15:32.930 3.812 - 3.840: 99.0185% ( 50) 00:15:32.930 3.840 - 3.868: 99.2608% ( 39) 00:15:32.930 3.868 - 3.896: 99.3726% ( 18) 00:15:32.930 3.896 - 3.923: 99.4099% ( 6) 00:15:32.930 3.923 - 3.951: 99.4285% ( 3) 00:15:32.930 3.951 - 3.979: 99.4533% ( 4) 00:15:32.930 3.979 - 4.007: 99.4720% ( 3) 00:15:32.930 4.035 - 4.063: 99.4844% ( 2) 00:15:32.930 4.063 - 4.090: 99.4906% ( 1) 00:15:32.930 4.118 - 4.146: 99.4968% ( 1) 00:15:32.930 4.146 - 4.174: 99.5030% ( 1) 00:15:32.930 4.174 - 4.202: 99.5093% ( 1) 00:15:32.930 4.341 - 4.369: 99.5217% ( 2) 00:15:32.930 4.369 - 4.397: 99.5279% ( 1) 00:15:32.930 4.591 - 4.619: 99.5403% ( 2) 00:15:32.930 4.703 - 4.730: 99.5465% ( 1) 00:15:32.930 4.758 - 4.786: 99.5527% ( 1) 00:15:32.930 4.842 - 4.870: 99.5590% ( 1) 00:15:32.930 4.981 - 5.009: 99.5652% ( 1) 00:15:32.930 5.203 - 5.231: 99.5714% ( 1) 00:15:32.930 5.231 - 5.259: 99.5776% ( 1) 00:15:32.930 5.426 - 5.454: 99.5838% ( 1) 00:15:32.930 5.510 - 5.537: 99.5962% ( 2) 00:15:32.930 5.537 - 5.565: 99.6024% ( 1) 00:15:32.930 5.565 - 5.593: 99.6086% ( 1) 00:15:32.930 5.621 - 5.649: 99.6149% ( 1) 00:15:32.930 5.649 - 5.677: 99.6211% ( 1) 00:15:32.930 5.704 - 5.732: 99.6273% ( 1) 00:15:32.930 5.732 - 5.760: 99.6397% ( 2) 00:15:32.930 5.760 - 5.788: 99.6459% ( 1) 00:15:32.930 5.843 - 5.871: 99.6521% ( 1) 00:15:32.930 5.871 - 5.899: 99.6583% ( 1) 00:15:32.930 5.955 - 5.983: 99.6646% ( 1) 00:15:32.930 6.066 - 6.094: 99.6770% ( 2) 00:15:32.930 6.122 - 6.150: 99.6894% ( 2) 00:15:32.930 6.150 - 6.177: 99.6956% ( 1) 00:15:32.930 6.177 - 6.205: 99.7080% ( 2) 00:15:32.930 6.233 - 6.261: 99.7143% ( 1) 00:15:32.930 6.483 - 6.511: 99.7267% ( 2) 00:15:32.930 6.511 - 6.539: 99.7329% ( 1) 00:15:32.930 6.539 - 6.567: 99.7391% ( 1) 00:15:32.930 6.595 - 6.623: 99.7515% ( 2) 00:15:32.930 6.706 - 6.734: 99.7577% ( 1) 00:15:32.930 6.817 - 6.845: 99.7639% ( 1) 00:15:32.930 6.873 - 6.901: 99.7702% ( 1) 00:15:32.930 6.929 - 6.957: 99.7826% ( 2) 
00:15:32.930 6.957 - 6.984: 99.7888% ( 1) 00:15:32.930 7.068 - 7.096: 99.7950% ( 1) 00:15:33.190 7.346 - 7.402: 99.8012% ( 1) 00:15:33.190 7.513 - 7.569: 99.8074% ( 1) 00:15:33.190 7.680 - 7.736: 99.8136% ( 1) 00:15:33.190 7.736 - 7.791: 99.8199% ( 1) 00:15:33.190 7.791 - 7.847: 99.8323% ( 2) 00:15:33.190 8.348 - 8.403: 99.8447% ( 2) 00:15:33.190 9.016 - 9.071: 99.8509% ( 1) 00:15:33.190 9.183 - 9.238: 99.8571% ( 1) 00:15:33.190 9.906 - 9.962: 99.8633% ( 1) 00:15:33.190 10.129 - 10.184: 99.8695% ( 1) 00:15:33.190 10.240 - 10.296: 99.8758% ( 1) 00:15:33.190 10.407 - 10.463: 99.8820% ( 1) 00:15:33.190 12.967 - 13.023: 99.8882% ( 1) 00:15:33.190 15.694 - 15.805: 99.8944% ( 1) 00:15:33.190 17.252 - 17.363: 99.9006% ( 1) 00:15:33.190 17.586 - 17.697: 99.9068% ( 1) 00:15:33.190 36.508 - 36.730: 99.9130% ( 1) 00:15:33.190 3989.148 - 4017.642: 100.0000% ( 14) 00:15:33.190 00:15:33.190 Complete histogram 00:15:33.190 ================== 00:15:33.190 Range in us Cumulative Count 00:15:33.190 1.809 - 1.823: 4.0067% ( 645) 00:15:33.190 1.823 - 1.837: 37.1910% ( 5342) 00:15:33.190 1.837 - 1.850: 58.8148% ( 3481) 00:15:33.190 1.850 - 1.864: 69.3316% ( 1693) 00:15:33.190 1.864 - 1.878: 88.3402% ( 3060) 00:15:33.190 1.878 - 1.892: 95.2354% ( 1110) 00:15:33.190 1.892 - 1.906: 97.2170% ( 319) 00:15:33.190 1.906 - 1.920: 98.3166% ( 177) 00:15:33.190 1.920 - 1.934: 98.6085% ( 47) 00:15:33.190 1.934 - 1.948: 98.7328% ( 20) 00:15:33.190 1.948 - 1.962: 98.8197% ( 14) 00:15:33.190 1.962 - 1.976: 98.8508% ( 5) 00:15:33.190 1.976 - 1.990: 98.8818% ( 5) 00:15:33.190 1.990 - 2.003: 98.9005% ( 3) 00:15:33.190 2.003 - 2.017: 98.9440% ( 7) 00:15:33.190 2.017 - 2.031: 98.9937% ( 8) 00:15:33.190 2.031 - 2.045: 98.9999% ( 1) 00:15:33.190 2.045 - 2.059: 99.0123% ( 2) 00:15:33.190 2.059 - 2.073: 99.0247% ( 2) 00:15:33.190 2.073 - 2.087: 99.0371% ( 2) 00:15:33.190 2.087 - 2.101: 99.0806% ( 7) 00:15:33.190 2.101 - 2.115: 99.1303% ( 8) 00:15:33.190 2.115 - 2.129: 99.1428% ( 2) 00:15:33.190 2.129 - 2.143: 99.1552% ( 2) 00:15:33.190 2.143 - 2.157: 99.1614% ( 1) 00:15:33.190 2.157 - 2.170: 99.1862% ( 4) 00:15:33.190 2.184 - 2.198: 99.1987% ( 2) 00:15:33.190 2.212 - 2.226: 99.2049% ( 1) 00:15:33.190 2.226 - 2.240: 99.2111% ( 1) 00:15:33.190 2.282 - 2.296: 99.2173% ( 1) 00:15:33.190 2.296 - 2.310: 99.2235% ( 1) 00:15:33.190 2.337 - 2.351: 99.2297% ( 1) 00:15:33.190 2.351 - 2.365: 99.2359% ( 1) 00:15:33.190 2.449 - 2.463: 99.2421% ( 1) 00:15:33.190 2.477 - 2.490: 99.2484% ( 1) 00:15:33.190 2.616 - 2.630: 99.2546% ( 1) 00:15:33.190 2.671 - 2.685: 99.2608% ( 1) 00:15:33.190 2.741 - 2.755: 99.2670% ( 1) 00:15:33.190 2.936 - 2.950: 99.2732% ( 1) 00:15:33.190 3.325 - 3.339: 99.2794% ( 1) 00:15:33.190 3.757 - 3.784: 99.2918% ( 2) 00:15:33.190 4.007 - 4.035: 99.2980% ( 1) 00:15:33.190 4.035 - 4.063: 99.3043% ( 1) 00:15:33.190 4.118 - 4.146: 99.3167% ( 2) 00:15:33.190 4.146 - 4.174: 99.3229% ( 1) 00:15:33.190 4.174 - 4.202: 99.3291% ( 1) 00:15:33.190 4.619 - 4.647: 99.3353% ( 1) 00:15:33.190 4.703 - 4.730: 99.3415% ( 1) 00:15:33.190 4.730 - 4.758: 99.3477% ( 1) 00:15:33.190 4.786 - 4.814: 99.3540% ( 1) 00:15:33.190 4.842 - 4.870: 99.3602% ( 1) 00:15:33.190 4.870 - 4.897: 99.3664% ( 1) 00:15:33.190 4.897 - 4.925: 99.3726% ( 1) 00:15:33.190 4.981 - 5.009: 99.3788% ( 1) 00:15:33.190 5.064 - 5.092: 99.3974% ( 3) 00:15:33.190 5.092 - 5.120: 99.4037% ( 1) 00:15:33.190 5.120 - 5.148: 99.4099% ( 1) 00:15:33.190 5.203 - 5.231: 99.4161% ( 1) 00:15:33.190 5.315 - 5.343: 99.4223% ( 1) 00:15:33.190 5.370 - 5.398: 99.4285% ( 1) 00:15:33.190 5.398 - 5.426: 
99.4347% ( 1) 00:15:33.190 5.426 - 5.454: 99.4409% ( 1) 00:15:33.190 5.482 - 5.510: 99.4471% ( 1) 00:15:33.190 5.704 - 5.732: 99.4596% ( 2) 00:15:33.190 5.871 - 5.899: 99.4658% ( 1) 00:15:33.190 6.010 - 6.038: 99.4720% ( 1) 00:15:33.190 6.094 - 6.122: 99.4782% ( 1) 00:15:33.190 6.539 - 6.5[2024-07-24 21:40:41.051176] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.190 67: 99.4844% ( 1) 00:15:33.190 6.595 - 6.623: 99.4906% ( 1) 00:15:33.190 6.873 - 6.901: 99.4968% ( 1) 00:15:33.190 7.847 - 7.903: 99.5030% ( 1) 00:15:33.190 8.292 - 8.348: 99.5093% ( 1) 00:15:33.190 8.682 - 8.737: 99.5155% ( 1) 00:15:33.190 12.911 - 12.967: 99.5217% ( 1) 00:15:33.190 15.583 - 15.694: 99.5279% ( 1) 00:15:33.190 18.477 - 18.588: 99.5341% ( 1) 00:15:33.190 3989.148 - 4017.642: 100.0000% ( 75) 00:15:33.190 00:15:33.190 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:33.190 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:33.190 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:33.190 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:33.190 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.190 [ 00:15:33.190 { 00:15:33.190 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.190 "subtype": "Discovery", 00:15:33.190 "listen_addresses": [], 00:15:33.190 "allow_any_host": true, 00:15:33.190 "hosts": [] 00:15:33.190 }, 00:15:33.190 { 00:15:33.190 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.190 "subtype": "NVMe", 00:15:33.190 "listen_addresses": [ 00:15:33.190 { 00:15:33.190 "trtype": "VFIOUSER", 00:15:33.190 "adrfam": "IPv4", 00:15:33.190 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.190 "trsvcid": "0" 00:15:33.190 } 00:15:33.190 ], 00:15:33.190 "allow_any_host": true, 00:15:33.190 "hosts": [], 00:15:33.190 "serial_number": "SPDK1", 00:15:33.190 "model_number": "SPDK bdev Controller", 00:15:33.190 "max_namespaces": 32, 00:15:33.190 "min_cntlid": 1, 00:15:33.191 "max_cntlid": 65519, 00:15:33.191 "namespaces": [ 00:15:33.191 { 00:15:33.191 "nsid": 1, 00:15:33.191 "bdev_name": "Malloc1", 00:15:33.191 "name": "Malloc1", 00:15:33.191 "nguid": "F3F855865B5D4996A1961A36E9F4CF52", 00:15:33.191 "uuid": "f3f85586-5b5d-4996-a196-1a36e9f4cf52" 00:15:33.191 }, 00:15:33.191 { 00:15:33.191 "nsid": 2, 00:15:33.191 "bdev_name": "Malloc3", 00:15:33.191 "name": "Malloc3", 00:15:33.191 "nguid": "E8E996973A9D46E0BBABC34D4FDA6C4C", 00:15:33.191 "uuid": "e8e99697-3a9d-46e0-bbab-c34d4fda6c4c" 00:15:33.191 } 00:15:33.191 ] 00:15:33.191 }, 00:15:33.191 { 00:15:33.191 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.191 "subtype": "NVMe", 00:15:33.191 "listen_addresses": [ 00:15:33.191 { 00:15:33.191 "trtype": "VFIOUSER", 00:15:33.191 "adrfam": "IPv4", 00:15:33.191 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.191 "trsvcid": "0" 00:15:33.191 } 00:15:33.191 ], 00:15:33.191 "allow_any_host": true, 00:15:33.191 "hosts": [], 00:15:33.191 "serial_number": "SPDK2", 00:15:33.191 "model_number": "SPDK bdev Controller", 00:15:33.191 "max_namespaces": 32, 
00:15:33.191 "min_cntlid": 1, 00:15:33.191 "max_cntlid": 65519, 00:15:33.191 "namespaces": [ 00:15:33.191 { 00:15:33.191 "nsid": 1, 00:15:33.191 "bdev_name": "Malloc2", 00:15:33.191 "name": "Malloc2", 00:15:33.191 "nguid": "680612B18F0A40D78D68B8D3A9C63CDB", 00:15:33.191 "uuid": "680612b1-8f0a-40d7-8d68-b8d3a9c63cdb" 00:15:33.191 } 00:15:33.191 ] 00:15:33.191 } 00:15:33.191 ] 00:15:33.191 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:33.191 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3042172 00:15:33.191 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:33.191 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:33.191 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:15:33.191 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:33.191 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:33.191 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:15:33.191 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:33.191 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:33.450 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.450 [2024-07-24 21:40:41.421498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.450 Malloc4 00:15:33.450 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:33.709 [2024-07-24 21:40:41.640125] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.709 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.709 Asynchronous Event Request test 00:15:33.709 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.709 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.709 Registering asynchronous event callbacks... 00:15:33.709 Starting namespace attribute notice tests for all controllers... 00:15:33.709 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:33.709 aer_cb - Changed Namespace 00:15:33.709 Cleaning up... 
00:15:33.969 [ 00:15:33.969 { 00:15:33.969 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.969 "subtype": "Discovery", 00:15:33.969 "listen_addresses": [], 00:15:33.969 "allow_any_host": true, 00:15:33.969 "hosts": [] 00:15:33.969 }, 00:15:33.969 { 00:15:33.969 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.969 "subtype": "NVMe", 00:15:33.969 "listen_addresses": [ 00:15:33.969 { 00:15:33.969 "trtype": "VFIOUSER", 00:15:33.969 "adrfam": "IPv4", 00:15:33.969 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.969 "trsvcid": "0" 00:15:33.969 } 00:15:33.969 ], 00:15:33.969 "allow_any_host": true, 00:15:33.969 "hosts": [], 00:15:33.969 "serial_number": "SPDK1", 00:15:33.969 "model_number": "SPDK bdev Controller", 00:15:33.969 "max_namespaces": 32, 00:15:33.969 "min_cntlid": 1, 00:15:33.969 "max_cntlid": 65519, 00:15:33.969 "namespaces": [ 00:15:33.969 { 00:15:33.969 "nsid": 1, 00:15:33.969 "bdev_name": "Malloc1", 00:15:33.969 "name": "Malloc1", 00:15:33.969 "nguid": "F3F855865B5D4996A1961A36E9F4CF52", 00:15:33.969 "uuid": "f3f85586-5b5d-4996-a196-1a36e9f4cf52" 00:15:33.969 }, 00:15:33.969 { 00:15:33.969 "nsid": 2, 00:15:33.969 "bdev_name": "Malloc3", 00:15:33.969 "name": "Malloc3", 00:15:33.969 "nguid": "E8E996973A9D46E0BBABC34D4FDA6C4C", 00:15:33.969 "uuid": "e8e99697-3a9d-46e0-bbab-c34d4fda6c4c" 00:15:33.969 } 00:15:33.969 ] 00:15:33.969 }, 00:15:33.969 { 00:15:33.969 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.969 "subtype": "NVMe", 00:15:33.969 "listen_addresses": [ 00:15:33.969 { 00:15:33.969 "trtype": "VFIOUSER", 00:15:33.969 "adrfam": "IPv4", 00:15:33.969 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.969 "trsvcid": "0" 00:15:33.969 } 00:15:33.969 ], 00:15:33.969 "allow_any_host": true, 00:15:33.969 "hosts": [], 00:15:33.969 "serial_number": "SPDK2", 00:15:33.969 "model_number": "SPDK bdev Controller", 00:15:33.969 "max_namespaces": 32, 00:15:33.969 "min_cntlid": 1, 00:15:33.969 "max_cntlid": 65519, 00:15:33.969 "namespaces": [ 00:15:33.969 { 00:15:33.969 "nsid": 1, 00:15:33.969 "bdev_name": "Malloc2", 00:15:33.969 "name": "Malloc2", 00:15:33.970 "nguid": "680612B18F0A40D78D68B8D3A9C63CDB", 00:15:33.970 "uuid": "680612b1-8f0a-40d7-8d68-b8d3a9c63cdb" 00:15:33.970 }, 00:15:33.970 { 00:15:33.970 "nsid": 2, 00:15:33.970 "bdev_name": "Malloc4", 00:15:33.970 "name": "Malloc4", 00:15:33.970 "nguid": "2131D21E80E24DE78CB6A9F59B26DD82", 00:15:33.970 "uuid": "2131d21e-80e2-4de7-8cb6-a9f59b26dd82" 00:15:33.970 } 00:15:33.970 ] 00:15:33.970 } 00:15:33.970 ] 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3042172 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3034335 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3034335 ']' 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3034335 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3034335 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3034335' 00:15:33.970 killing process with pid 3034335 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3034335 00:15:33.970 21:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3034335 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3042404 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3042404' 00:15:34.230 Process pid: 3042404 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3042404 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3042404 ']' 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.230 21:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:34.230 [2024-07-24 21:40:42.208654] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:34.230 [2024-07-24 21:40:42.209505] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:15:34.230 [2024-07-24 21:40:42.209546] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.230 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.230 [2024-07-24 21:40:42.267546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.490 [2024-07-24 21:40:42.347318] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.490 [2024-07-24 21:40:42.347357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.490 [2024-07-24 21:40:42.347364] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.490 [2024-07-24 21:40:42.347370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.490 [2024-07-24 21:40:42.347375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.490 [2024-07-24 21:40:42.347640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.490 [2024-07-24 21:40:42.347739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.490 [2024-07-24 21:40:42.347824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.490 [2024-07-24 21:40:42.347825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.490 [2024-07-24 21:40:42.426269] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:34.490 [2024-07-24 21:40:42.426330] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:34.490 [2024-07-24 21:40:42.426475] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:34.490 [2024-07-24 21:40:42.426793] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:34.490 [2024-07-24 21:40:42.426973] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:15:35.059 21:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:35.059 21:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:35.059 21:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:35.995 21:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:36.255 21:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:36.255 21:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:36.255 21:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:36.255 21:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:36.255 21:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:36.255 Malloc1 00:15:36.514 21:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:36.514 21:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:36.773 21:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:37.033 21:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:37.033 21:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:37.033 21:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:37.033 Malloc2 00:15:37.033 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:37.293 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:37.553 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:37.553 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:37.553 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3042404 00:15:37.553 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@948 -- # '[' -z 3042404 ']' 00:15:37.553 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3042404 00:15:37.553 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:37.553 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:37.553 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3042404 00:15:37.812 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:37.812 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:37.812 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3042404' 00:15:37.812 killing process with pid 3042404 00:15:37.812 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3042404 00:15:37.812 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3042404 00:15:37.812 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:37.812 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:37.812 00:15:37.812 real 0m52.222s 00:15:37.812 user 3m26.789s 00:15:37.812 sys 0m3.611s 00:15:38.073 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:38.073 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:38.073 ************************************ 00:15:38.073 END TEST nvmf_vfio_user 00:15:38.073 ************************************ 00:15:38.073 21:40:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:38.073 21:40:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:38.073 21:40:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.073 21:40:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:38.073 ************************************ 00:15:38.073 START TEST nvmf_vfio_user_nvme_compliance 00:15:38.073 ************************************ 00:15:38.073 21:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:38.073 * Looking for test storage... 
00:15:38.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.073 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3043093 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3043093' 00:15:38.074 Process pid: 3043093 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3043093 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3043093 ']' 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.074 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:38.074 [2024-07-24 21:40:46.162301] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:15:38.074 [2024-07-24 21:40:46.162350] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.074 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.333 [2024-07-24 21:40:46.217797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:38.333 [2024-07-24 21:40:46.298395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.333 [2024-07-24 21:40:46.298430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.333 [2024-07-24 21:40:46.298438] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.333 [2024-07-24 21:40:46.298444] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.333 [2024-07-24 21:40:46.298449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.333 [2024-07-24 21:40:46.298492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.333 [2024-07-24 21:40:46.298588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.333 [2024-07-24 21:40:46.298589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.901 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.901 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:15:38.901 21:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:39.870 21:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:39.870 21:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:39.870 21:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:39.870 21:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.870 21:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.128 21:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.129 21:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:40.129 21:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:40.129 21:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.129 21:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.129 malloc0 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.129 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:40.129 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.129 00:15:40.129 00:15:40.129 CUnit - A unit testing framework for C - Version 2.1-3 00:15:40.129 http://cunit.sourceforge.net/ 00:15:40.129 00:15:40.129 00:15:40.129 Suite: nvme_compliance 00:15:40.129 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 21:40:48.204470] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.129 [2024-07-24 21:40:48.205816] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:40.129 [2024-07-24 21:40:48.205830] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:40.129 [2024-07-24 21:40:48.205837] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:40.129 [2024-07-24 21:40:48.207490] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.129 passed 00:15:40.387 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 21:40:48.288041] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.387 [2024-07-24 21:40:48.291063] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.387 passed 00:15:40.387 Test: admin_identify_ns ...[2024-07-24 21:40:48.364476] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.387 [2024-07-24 21:40:48.428058] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:40.387 [2024-07-24 21:40:48.436055] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:40.387 [2024-07-24 
21:40:48.457108] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.387 passed 00:15:40.646 Test: admin_get_features_mandatory_features ...[2024-07-24 21:40:48.532346] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.646 [2024-07-24 21:40:48.536373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.646 passed 00:15:40.646 Test: admin_get_features_optional_features ...[2024-07-24 21:40:48.612883] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.646 [2024-07-24 21:40:48.615899] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.646 passed 00:15:40.646 Test: admin_set_features_number_of_queues ...[2024-07-24 21:40:48.693855] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.905 [2024-07-24 21:40:48.799138] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.905 passed 00:15:40.905 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 21:40:48.871436] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.905 [2024-07-24 21:40:48.874460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.905 passed 00:15:40.905 Test: admin_get_log_page_with_lpo ...[2024-07-24 21:40:48.954400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.164 [2024-07-24 21:40:49.023054] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:41.164 [2024-07-24 21:40:49.036116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.164 passed 00:15:41.164 Test: fabric_property_get ...[2024-07-24 21:40:49.110246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.164 [2024-07-24 21:40:49.111487] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:41.164 [2024-07-24 21:40:49.113270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.164 passed 00:15:41.164 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 21:40:49.192807] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.164 [2024-07-24 21:40:49.194041] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:41.164 [2024-07-24 21:40:49.195825] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.164 passed 00:15:41.164 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 21:40:49.272525] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.422 [2024-07-24 21:40:49.360053] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.422 [2024-07-24 21:40:49.376057] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.422 [2024-07-24 21:40:49.381132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.422 passed 00:15:41.422 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 21:40:49.456308] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.422 [2024-07-24 21:40:49.457545] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:15:41.422 [2024-07-24 21:40:49.459327] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.422 passed 00:15:41.422 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 21:40:49.537241] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.680 [2024-07-24 21:40:49.614051] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:41.680 [2024-07-24 21:40:49.638058] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.680 [2024-07-24 21:40:49.643123] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.680 passed 00:15:41.680 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 21:40:49.721271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.680 [2024-07-24 21:40:49.722504] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:41.681 [2024-07-24 21:40:49.722525] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:41.681 [2024-07-24 21:40:49.724290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.681 passed 00:15:41.940 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 21:40:49.801504] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.940 [2024-07-24 21:40:49.893052] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:41.940 [2024-07-24 21:40:49.901054] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:41.940 [2024-07-24 21:40:49.909054] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:41.940 [2024-07-24 21:40:49.917052] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:41.940 [2024-07-24 21:40:49.946129] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.940 passed 00:15:41.940 Test: admin_create_io_sq_verify_pc ...[2024-07-24 21:40:50.022297] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.940 [2024-07-24 21:40:50.039062] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:42.199 [2024-07-24 21:40:50.056502] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.199 passed 00:15:42.199 Test: admin_create_io_qp_max_qps ...[2024-07-24 21:40:50.137120] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.578 [2024-07-24 21:40:51.275052] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:43.578 [2024-07-24 21:40:51.659609] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.578 passed 00:15:43.838 Test: admin_create_io_sq_shared_cq ...[2024-07-24 21:40:51.737549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.838 [2024-07-24 21:40:51.870054] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:43.838 [2024-07-24 21:40:51.907115] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.838 passed 00:15:43.838 00:15:43.838 Run Summary: Type Total Ran Passed Failed Inactive 00:15:43.838 
suites 1 1 n/a 0 0 00:15:43.838 tests 18 18 18 0 0 00:15:43.838 asserts 360 360 360 0 n/a 00:15:43.838 00:15:43.838 Elapsed time = 1.531 seconds 00:15:43.838 21:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3043093 00:15:43.838 21:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3043093 ']' 00:15:43.838 21:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3043093 00:15:43.838 21:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:15:44.097 21:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.097 21:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3043093 00:15:44.097 21:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:44.097 21:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:44.097 21:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3043093' 00:15:44.097 killing process with pid 3043093 00:15:44.097 21:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3043093 00:15:44.097 21:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3043093 00:15:44.097 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:44.097 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:44.097 00:15:44.097 real 0m6.206s 00:15:44.097 user 0m17.693s 00:15:44.097 sys 0m0.482s 00:15:44.097 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:44.097 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.097 ************************************ 00:15:44.097 END TEST nvmf_vfio_user_nvme_compliance 00:15:44.097 ************************************ 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:44.356 ************************************ 00:15:44.356 START TEST nvmf_vfio_user_fuzz 00:15:44.356 ************************************ 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:44.356 * Looking for test storage... 
00:15:44.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.356 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3044150 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3044150' 00:15:44.357 Process pid: 3044150 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3044150 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3044150 ']' 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
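For reference, the launch-and-wait step traced above reduces to the following standalone shell sketch (assumptions: $SPDK_DIR points at an SPDK build tree and the target uses the default RPC socket /var/tmp/spdk.sock; the flags mirror what the trace shows, and the wait loop is a simplification of the harness's waitforlisten helper, not an exact reproduction of it):

    # Start the NVMe-oF target: shm id 0, all tracepoint groups enabled, core mask 0x1
    "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT

    # Block until the target answers RPCs on its UNIX-domain socket
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done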
00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.357 21:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.295 21:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.296 21:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:15:45.296 21:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.269 malloc0 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
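The rpc_cmd calls traced above correspond to the following plain rpc.py invocations against the target started earlier (a minimal sketch; $SPDK_DIR is an assumed path to the SPDK tree, while the command names and arguments are taken from the trace itself):

    # Enable the vfio-user transport and create the socket directory it will use
    "$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user

    # 64 MB malloc bdev with 512-byte blocks, exported through a vfio-user subsystem
    "$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    "$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz run on the next trace line then points at this listener (trtype:VFIOUSER, traddr:/var/run/vfio-user, subnqn:nqn.2021-09.io.spdk:cnode0) for 30 seconds with seed 123456.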
00:15:46.269 21:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:18.398 Fuzzing completed. Shutting down the fuzz application 00:16:18.398 00:16:18.398 Dumping successful admin opcodes: 00:16:18.398 8, 9, 10, 24, 00:16:18.398 Dumping successful io opcodes: 00:16:18.398 0, 00:16:18.398 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1128295, total successful commands: 4442, random_seed: 3504989696 00:16:18.398 NS: 0x200003a1ef00 admin qp, Total commands completed: 281283, total successful commands: 2264, random_seed: 3259604032 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3044150 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3044150 ']' 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3044150 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3044150 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3044150' 00:16:18.398 killing process with pid 3044150 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3044150 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3044150 00:16:18.398 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:18.398 00:16:18.398 real 0m32.779s 00:16:18.398 user 0m35.730s 00:16:18.398 sys 0m26.111s 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.398 
************************************ 00:16:18.398 END TEST nvmf_vfio_user_fuzz 00:16:18.398 ************************************ 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:18.398 ************************************ 00:16:18.398 START TEST nvmf_auth_target 00:16:18.398 ************************************ 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:18.398 * Looking for test storage... 00:16:18.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:18.398 21:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.398 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:18.399 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.596 21:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:22.596 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:22.596 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:22.596 Found net devices under 0000:86:00.0: cvl_0_0 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.596 21:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.596 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:22.597 Found net devices under 0000:86:00.1: cvl_0_1 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.597 21:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:22.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:16:22.597 00:16:22.597 --- 10.0.0.2 ping statistics --- 00:16:22.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.597 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:16:22.597 00:16:22.597 --- 10.0.0.1 ping statistics --- 00:16:22.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.597 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3052462 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3052462 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3052462 ']' 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
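The nvmf_tcp_init sequence traced above establishes the physical-NIC topology the rest of this run depends on: the two ice ports appear as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a dedicated network namespace (cvl_0_0_ns_spdk) and addressed as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in the firewall, and connectivity is verified with one ping in each direction. Condensed into a plain shell sketch (interface names and addresses are the ones from this run and will differ on other machines):

  TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"                               # target port gets its own namespace
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"      # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target side
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

Splitting the two ports across namespaces lets the kernel initiator and the SPDK target share one machine while the test traffic still crosses the two physical ports (presumably cabled back-to-back on this rig) rather than the loopback path.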
00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:22.597 21:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3052708 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0b58bf8494f577b47aa0ec8fb5f2d44423e758ef85cd879a 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jay 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0b58bf8494f577b47aa0ec8fb5f2d44423e758ef85cd879a 0 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0b58bf8494f577b47aa0ec8fb5f2d44423e758ef85cd879a 0 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # 
key=0b58bf8494f577b47aa0ec8fb5f2d44423e758ef85cd879a 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jay 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jay 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.jay 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.537 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a7c62cc82511877ef0d24ff2f1e05602c7a41872b9fc02a7b9cb608928d36d99 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GIn 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a7c62cc82511877ef0d24ff2f1e05602c7a41872b9fc02a7b9cb608928d36d99 3 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a7c62cc82511877ef0d24ff2f1e05602c7a41872b9fc02a7b9cb608928d36d99 3 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a7c62cc82511877ef0d24ff2f1e05602c7a41872b9fc02a7b9cb608928d36d99 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GIn 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GIn 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.GIn 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4dbbe62cec027dd73ccffae920f8713f 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GGP 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4dbbe62cec027dd73ccffae920f8713f 1 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4dbbe62cec027dd73ccffae920f8713f 1 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4dbbe62cec027dd73ccffae920f8713f 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GGP 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GGP 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.GGP 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2c129c516a4d397e01e04b18e1692b44974d635119aabaf5 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FeW 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2c129c516a4d397e01e04b18e1692b44974d635119aabaf5 2 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@719 -- # format_key DHHC-1 2c129c516a4d397e01e04b18e1692b44974d635119aabaf5 2 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2c129c516a4d397e01e04b18e1692b44974d635119aabaf5 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FeW 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FeW 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.FeW 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0eed94b0702acfac6b1d85db0d71bce79b0face65eabbc12 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tBG 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0eed94b0702acfac6b1d85db0d71bce79b0face65eabbc12 2 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0eed94b0702acfac6b1d85db0d71bce79b0face65eabbc12 2 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0eed94b0702acfac6b1d85db0d71bce79b0face65eabbc12 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tBG 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tBG 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.tBG 
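The gen_dhchap_key calls traced here build the DH-HMAC-CHAP secrets used later by both the SPDK host and nvme-cli: each call draws len/2 random bytes from /dev/urandom as a hex string (xxd -p -c0), writes the formatted secret into a mode-0600 temp file (/tmp/spdk.key-<digest>.XXX), and records the path in keys[] or ckeys[]. The formatting itself happens in the "python -" step, which the trace does not expand; a minimal reconstruction, assuming the usual DHHC-1 secret representation of base64(key || CRC-32 of the key, little-endian) with a two-digit hash identifier (00 null, 01 sha256, 02 sha384, 03 sha512), is:

  # hedged sketch of gen_dhchap_key; the python body is an assumption, the rest mirrors the trace
  gen_dhchap_key() {    # usage: gen_dhchap_key <digest> <len>
      local digest=$1 len=$2 key file
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # <len> hex characters of randomness
      file=$(mktemp -t "spdk.key-$digest.XXX")
      # assumed: base64 of the ASCII key with a little-endian CRC-32 appended, per DHHC-1
      python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()), end="")' \
          "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

In this run the helper is invoked as gen_dhchap_key null 48, sha512 64, sha256 32, sha384 48 and so on, and the resulting files (/tmp/spdk.key-null.jay, /tmp/spdk.key-sha512.GIn, ...) are what get registered below on the target with keyring_file_add_key and on the host with the same call against /var/tmp/host.sock.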
00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c88c849f9b9fb3afc4ef4d7b20e220b1 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uIk 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c88c849f9b9fb3afc4ef4d7b20e220b1 1 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c88c849f9b9fb3afc4ef4d7b20e220b1 1 00:16:23.538 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c88c849f9b9fb3afc4ef4d7b20e220b1 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uIk 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uIk 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.uIk 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5b591b7ad2318419f4864dc970af1795c89a1c5c406421a9d0155b71a0a78e2b 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t 
spdk.key-sha512.XXX 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1No 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5b591b7ad2318419f4864dc970af1795c89a1c5c406421a9d0155b71a0a78e2b 3 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5b591b7ad2318419f4864dc970af1795c89a1c5c406421a9d0155b71a0a78e2b 3 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5b591b7ad2318419f4864dc970af1795c89a1c5c406421a9d0155b71a0a78e2b 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1No 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1No 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.1No 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3052462 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3052462 ']' 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.798 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.059 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.059 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:24.059 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3052708 /var/tmp/host.sock 00:16:24.059 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3052708 ']' 00:16:24.059 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:24.059 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.059 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:16:24.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:24.059 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.059 21:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jay 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.jay 00:16:24.059 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.jay 00:16:24.319 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.GIn ]] 00:16:24.319 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GIn 00:16:24.319 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.319 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.319 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.319 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GIn 00:16:24.319 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GIn 00:16:24.579 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:24.579 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.GGP 00:16:24.579 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.579 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.580 21:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.580 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.GGP 00:16:24.580 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.GGP 00:16:24.580 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.FeW ]] 00:16:24.580 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FeW 00:16:24.580 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.580 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.840 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.840 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FeW 00:16:24.840 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FeW 00:16:24.840 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:24.840 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.tBG 00:16:24.840 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.840 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.840 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.840 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.tBG 00:16:24.840 21:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.tBG 00:16:25.100 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.uIk ]] 00:16:25.100 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uIk 00:16:25.100 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.100 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.100 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.100 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uIk 00:16:25.100 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uIk 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1No 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.1No 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.1No 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.360 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.620 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:25.620 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.620 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:25.620 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:25.620 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:25.620 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.620 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.620 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.620 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.620 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.620 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.620 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.880 00:16:25.880 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.880 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.880 21:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.141 { 00:16:26.141 "cntlid": 1, 00:16:26.141 "qid": 0, 00:16:26.141 "state": "enabled", 00:16:26.141 "thread": "nvmf_tgt_poll_group_000", 00:16:26.141 "listen_address": { 00:16:26.141 "trtype": "TCP", 00:16:26.141 "adrfam": "IPv4", 00:16:26.141 "traddr": "10.0.0.2", 00:16:26.141 "trsvcid": "4420" 00:16:26.141 }, 00:16:26.141 "peer_address": { 00:16:26.141 "trtype": "TCP", 00:16:26.141 "adrfam": "IPv4", 00:16:26.141 "traddr": "10.0.0.1", 00:16:26.141 "trsvcid": "43218" 00:16:26.141 }, 00:16:26.141 "auth": { 00:16:26.141 "state": "completed", 00:16:26.141 "digest": "sha256", 00:16:26.141 "dhgroup": "null" 00:16:26.141 } 00:16:26.141 } 00:16:26.141 ]' 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.141 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.402 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:16:26.972 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.972 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.972 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.972 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.972 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.972 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.972 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.972 21:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.972 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:26.972 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.972 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:26.972 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:26.972 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:26.972 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.972 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.972 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.972 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.972 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.972 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.972 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:16:27.232 00:16:27.232 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.232 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.232 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.493 { 00:16:27.493 "cntlid": 3, 00:16:27.493 "qid": 0, 00:16:27.493 "state": "enabled", 00:16:27.493 "thread": "nvmf_tgt_poll_group_000", 00:16:27.493 "listen_address": { 00:16:27.493 "trtype": "TCP", 00:16:27.493 "adrfam": "IPv4", 00:16:27.493 "traddr": "10.0.0.2", 00:16:27.493 "trsvcid": "4420" 00:16:27.493 }, 00:16:27.493 "peer_address": { 00:16:27.493 "trtype": "TCP", 00:16:27.493 "adrfam": "IPv4", 00:16:27.493 "traddr": "10.0.0.1", 00:16:27.493 "trsvcid": "43254" 00:16:27.493 }, 00:16:27.493 "auth": { 00:16:27.493 "state": "completed", 00:16:27.493 "digest": "sha256", 00:16:27.493 "dhgroup": "null" 00:16:27.493 } 00:16:27.493 } 00:16:27.493 ]' 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.493 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.753 21:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:16:28.323 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.323 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:28.323 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.323 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.323 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.323 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.323 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.323 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.323 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.584 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:28.584 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.584 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.584 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:28.584 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:28.584 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.584 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.584 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.584 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.584 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.584 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.584 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.584 00:16:28.844 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.844 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.844 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.844 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.844 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.844 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.844 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.844 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.844 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.844 { 00:16:28.844 "cntlid": 5, 00:16:28.844 "qid": 0, 00:16:28.844 "state": "enabled", 00:16:28.844 "thread": "nvmf_tgt_poll_group_000", 00:16:28.844 "listen_address": { 00:16:28.844 "trtype": "TCP", 00:16:28.844 "adrfam": "IPv4", 00:16:28.844 "traddr": "10.0.0.2", 00:16:28.844 "trsvcid": "4420" 00:16:28.844 }, 00:16:28.844 "peer_address": { 00:16:28.844 "trtype": "TCP", 00:16:28.844 "adrfam": "IPv4", 00:16:28.844 "traddr": "10.0.0.1", 00:16:28.844 "trsvcid": "49084" 00:16:28.844 }, 00:16:28.844 "auth": { 00:16:28.844 "state": "completed", 00:16:28.844 "digest": "sha256", 00:16:28.844 "dhgroup": "null" 00:16:28.844 } 00:16:28.844 } 00:16:28.844 ]' 00:16:28.844 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.844 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.844 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.104 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:29.104 21:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.104 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.104 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.104 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.104 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:16:29.674 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.674 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.674 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:29.674 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.674 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.674 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.674 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.674 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.934 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:29.934 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.934 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.934 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:29.934 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:29.934 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.934 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:29.934 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.934 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.934 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.934 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.935 21:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.194 00:16:30.194 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.194 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.194 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.455 { 00:16:30.455 "cntlid": 7, 00:16:30.455 "qid": 0, 00:16:30.455 "state": "enabled", 00:16:30.455 "thread": "nvmf_tgt_poll_group_000", 00:16:30.455 "listen_address": { 00:16:30.455 "trtype": "TCP", 00:16:30.455 "adrfam": "IPv4", 00:16:30.455 "traddr": "10.0.0.2", 00:16:30.455 "trsvcid": "4420" 00:16:30.455 }, 00:16:30.455 "peer_address": { 00:16:30.455 "trtype": "TCP", 00:16:30.455 "adrfam": "IPv4", 00:16:30.455 "traddr": "10.0.0.1", 00:16:30.455 "trsvcid": "49106" 00:16:30.455 }, 00:16:30.455 "auth": { 00:16:30.455 "state": "completed", 00:16:30.455 "digest": "sha256", 00:16:30.455 "dhgroup": "null" 00:16:30.455 } 00:16:30.455 } 00:16:30.455 ]' 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.455 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.715 21:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:16:31.285 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.285 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.286 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.286 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.286 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.286 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.286 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.286 21:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.286 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.546 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:31.546 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.546 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.546 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:31.546 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:31.546 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.546 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.546 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.546 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.546 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.546 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.546 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.807 00:16:31.807 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.807 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.807 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.067 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.067 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.067 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.067 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.067 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.067 21:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.067 { 00:16:32.067 "cntlid": 9, 00:16:32.067 "qid": 0, 00:16:32.067 "state": "enabled", 00:16:32.067 "thread": "nvmf_tgt_poll_group_000", 00:16:32.067 "listen_address": { 00:16:32.067 "trtype": "TCP", 00:16:32.067 "adrfam": "IPv4", 00:16:32.067 "traddr": "10.0.0.2", 00:16:32.067 "trsvcid": "4420" 00:16:32.067 }, 00:16:32.067 "peer_address": { 00:16:32.067 "trtype": "TCP", 00:16:32.067 "adrfam": "IPv4", 00:16:32.067 "traddr": "10.0.0.1", 00:16:32.067 "trsvcid": "49142" 00:16:32.067 }, 00:16:32.067 "auth": { 00:16:32.067 "state": "completed", 00:16:32.067 "digest": "sha256", 00:16:32.067 "dhgroup": "ffdhe2048" 00:16:32.067 } 00:16:32.067 } 00:16:32.067 ]' 00:16:32.067 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.067 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.067 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.067 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.067 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.067 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.067 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.067 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.327 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.898 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.192 00:16:33.192 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.192 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.192 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.452 { 00:16:33.452 "cntlid": 11, 00:16:33.452 "qid": 0, 00:16:33.452 "state": "enabled", 00:16:33.452 "thread": "nvmf_tgt_poll_group_000", 00:16:33.452 "listen_address": { 
00:16:33.452 "trtype": "TCP", 00:16:33.452 "adrfam": "IPv4", 00:16:33.452 "traddr": "10.0.0.2", 00:16:33.452 "trsvcid": "4420" 00:16:33.452 }, 00:16:33.452 "peer_address": { 00:16:33.452 "trtype": "TCP", 00:16:33.452 "adrfam": "IPv4", 00:16:33.452 "traddr": "10.0.0.1", 00:16:33.452 "trsvcid": "49184" 00:16:33.452 }, 00:16:33.452 "auth": { 00:16:33.452 "state": "completed", 00:16:33.452 "digest": "sha256", 00:16:33.452 "dhgroup": "ffdhe2048" 00:16:33.452 } 00:16:33.452 } 00:16:33.452 ]' 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.452 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.712 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:16:34.283 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.283 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.283 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.283 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.283 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.283 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.283 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.283 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.543 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:34.543 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.543 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.543 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:34.543 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:34.543 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.543 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.543 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.543 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.543 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.543 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.543 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.543 00:16:34.803 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.803 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.803 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.803 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.803 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.803 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.803 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.803 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.803 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.803 { 00:16:34.803 "cntlid": 13, 00:16:34.803 "qid": 0, 00:16:34.803 "state": "enabled", 00:16:34.803 "thread": "nvmf_tgt_poll_group_000", 00:16:34.803 "listen_address": { 00:16:34.803 "trtype": "TCP", 00:16:34.803 "adrfam": "IPv4", 00:16:34.803 "traddr": "10.0.0.2", 00:16:34.803 "trsvcid": "4420" 00:16:34.803 }, 00:16:34.803 "peer_address": { 00:16:34.803 "trtype": "TCP", 00:16:34.803 "adrfam": "IPv4", 00:16:34.803 "traddr": "10.0.0.1", 00:16:34.803 "trsvcid": "49202" 00:16:34.803 }, 00:16:34.803 "auth": { 00:16:34.803 
"state": "completed", 00:16:34.803 "digest": "sha256", 00:16:34.803 "dhgroup": "ffdhe2048" 00:16:34.803 } 00:16:34.803 } 00:16:34.803 ]' 00:16:34.803 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.803 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.803 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.063 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.063 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.063 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.063 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.063 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.063 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:16:35.633 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.633 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.633 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.633 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.633 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.633 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.633 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.633 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.893 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:35.893 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.893 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.893 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:35.893 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:16:35.893 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.893 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:35.893 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.893 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.893 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.893 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.893 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:36.153 00:16:36.153 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.153 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.153 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.413 { 00:16:36.413 "cntlid": 15, 00:16:36.413 "qid": 0, 00:16:36.413 "state": "enabled", 00:16:36.413 "thread": "nvmf_tgt_poll_group_000", 00:16:36.413 "listen_address": { 00:16:36.413 "trtype": "TCP", 00:16:36.413 "adrfam": "IPv4", 00:16:36.413 "traddr": "10.0.0.2", 00:16:36.413 "trsvcid": "4420" 00:16:36.413 }, 00:16:36.413 "peer_address": { 00:16:36.413 "trtype": "TCP", 00:16:36.413 "adrfam": "IPv4", 00:16:36.413 "traddr": "10.0.0.1", 00:16:36.413 "trsvcid": "49236" 00:16:36.413 }, 00:16:36.413 "auth": { 00:16:36.413 "state": "completed", 00:16:36.413 "digest": "sha256", 00:16:36.413 "dhgroup": "ffdhe2048" 00:16:36.413 } 00:16:36.413 } 00:16:36.413 ]' 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.413 21:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.413 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.673 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.243 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.503 00:16:37.503 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.503 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.503 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.763 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.763 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.763 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.763 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.763 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.763 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.763 { 00:16:37.763 "cntlid": 17, 00:16:37.763 "qid": 0, 00:16:37.763 "state": "enabled", 00:16:37.763 "thread": "nvmf_tgt_poll_group_000", 00:16:37.763 "listen_address": { 00:16:37.763 "trtype": "TCP", 00:16:37.763 "adrfam": "IPv4", 00:16:37.763 "traddr": "10.0.0.2", 00:16:37.763 "trsvcid": "4420" 00:16:37.763 }, 00:16:37.763 "peer_address": { 00:16:37.763 "trtype": "TCP", 00:16:37.763 "adrfam": "IPv4", 00:16:37.763 "traddr": "10.0.0.1", 00:16:37.763 "trsvcid": "49250" 00:16:37.763 }, 00:16:37.763 "auth": { 00:16:37.763 "state": "completed", 00:16:37.763 "digest": "sha256", 00:16:37.763 "dhgroup": "ffdhe3072" 00:16:37.763 } 00:16:37.763 } 00:16:37.763 ]' 00:16:37.763 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.764 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.764 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.764 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.764 21:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.024 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.024 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.024 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.024 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:16:38.594 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.594 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.594 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.594 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.594 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.594 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.594 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.594 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.853 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:38.853 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.853 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.853 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:38.853 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:38.853 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.853 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.854 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.854 21:41:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.854 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.854 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.854 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.113 00:16:39.113 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.113 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.113 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.113 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.113 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.113 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.113 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.113 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.113 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.113 { 00:16:39.113 "cntlid": 19, 00:16:39.113 "qid": 0, 00:16:39.113 "state": "enabled", 00:16:39.113 "thread": "nvmf_tgt_poll_group_000", 00:16:39.113 "listen_address": { 00:16:39.113 "trtype": "TCP", 00:16:39.113 "adrfam": "IPv4", 00:16:39.113 "traddr": "10.0.0.2", 00:16:39.113 "trsvcid": "4420" 00:16:39.113 }, 00:16:39.113 "peer_address": { 00:16:39.113 "trtype": "TCP", 00:16:39.113 "adrfam": "IPv4", 00:16:39.113 "traddr": "10.0.0.1", 00:16:39.113 "trsvcid": "44992" 00:16:39.113 }, 00:16:39.113 "auth": { 00:16:39.113 "state": "completed", 00:16:39.113 "digest": "sha256", 00:16:39.113 "dhgroup": "ffdhe3072" 00:16:39.113 } 00:16:39.113 } 00:16:39.113 ]' 00:16:39.113 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.373 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.373 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.373 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.373 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.373 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.373 21:41:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.373 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.634 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:16:39.894 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.154 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.413 00:16:40.413 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.413 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.413 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.673 { 00:16:40.673 "cntlid": 21, 00:16:40.673 "qid": 0, 00:16:40.673 "state": "enabled", 00:16:40.673 "thread": "nvmf_tgt_poll_group_000", 00:16:40.673 "listen_address": { 00:16:40.673 "trtype": "TCP", 00:16:40.673 "adrfam": "IPv4", 00:16:40.673 "traddr": "10.0.0.2", 00:16:40.673 "trsvcid": "4420" 00:16:40.673 }, 00:16:40.673 "peer_address": { 00:16:40.673 "trtype": "TCP", 00:16:40.673 "adrfam": "IPv4", 00:16:40.673 "traddr": "10.0.0.1", 00:16:40.673 "trsvcid": "45026" 00:16:40.673 }, 00:16:40.673 "auth": { 00:16:40.673 "state": "completed", 00:16:40.673 "digest": "sha256", 00:16:40.673 "dhgroup": "ffdhe3072" 00:16:40.673 } 00:16:40.673 } 00:16:40.673 ]' 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.673 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.933 
21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:16:41.503 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.503 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.503 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.503 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.503 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.503 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.504 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.504 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.764 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:41.764 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.764 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.764 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:41.764 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:41.764 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.764 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:41.764 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.764 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.764 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.764 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.764 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:42.024 00:16:42.024 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.024 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.024 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.284 { 00:16:42.284 "cntlid": 23, 00:16:42.284 "qid": 0, 00:16:42.284 "state": "enabled", 00:16:42.284 "thread": "nvmf_tgt_poll_group_000", 00:16:42.284 "listen_address": { 00:16:42.284 "trtype": "TCP", 00:16:42.284 "adrfam": "IPv4", 00:16:42.284 "traddr": "10.0.0.2", 00:16:42.284 "trsvcid": "4420" 00:16:42.284 }, 00:16:42.284 "peer_address": { 00:16:42.284 "trtype": "TCP", 00:16:42.284 "adrfam": "IPv4", 00:16:42.284 "traddr": "10.0.0.1", 00:16:42.284 "trsvcid": "45050" 00:16:42.284 }, 00:16:42.284 "auth": { 00:16:42.284 "state": "completed", 00:16:42.284 "digest": "sha256", 00:16:42.284 "dhgroup": "ffdhe3072" 00:16:42.284 } 00:16:42.284 } 00:16:42.284 ]' 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.284 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.544 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:16:43.115 21:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.115 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.374 00:16:43.374 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.374 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.374 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.633 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.633 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.633 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.633 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.633 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.633 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.633 { 00:16:43.633 "cntlid": 25, 00:16:43.633 "qid": 0, 00:16:43.633 "state": "enabled", 00:16:43.633 "thread": "nvmf_tgt_poll_group_000", 00:16:43.633 "listen_address": { 00:16:43.633 "trtype": "TCP", 00:16:43.633 "adrfam": "IPv4", 00:16:43.633 "traddr": "10.0.0.2", 00:16:43.633 "trsvcid": "4420" 00:16:43.633 }, 00:16:43.633 "peer_address": { 00:16:43.633 "trtype": "TCP", 00:16:43.633 "adrfam": "IPv4", 00:16:43.633 "traddr": "10.0.0.1", 00:16:43.633 "trsvcid": "45070" 00:16:43.633 }, 00:16:43.633 "auth": { 00:16:43.633 "state": "completed", 00:16:43.633 "digest": "sha256", 00:16:43.633 "dhgroup": "ffdhe4096" 00:16:43.633 } 00:16:43.633 } 00:16:43.633 ]' 00:16:43.633 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.633 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.633 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.925 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.925 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.925 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.925 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.925 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.925 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:16:44.494 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
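
Each iteration in this part of the log follows the same shape: pin the SPDK host's allowed digest/DH group, register the host NQN on the subsystem with a DH-HMAC-CHAP key, attach an SPDK initiator controller and verify the qpair authenticated, then redo the handshake with the kernel nvme CLI and clean up. As a reading aid, a minimal shell sketch of one iteration follows; the NQNs, addresses, RPC socket path, flags and key names are copied from the log itself, while the RPC= shorthand, the DHCHAP_KEY / DHCHAP_CTRL_KEY placeholders, and the use of rpc.py's default socket to stand in for the log's rpc_cmd helper are assumptions, not the verbatim target/auth.sh source.

    # Sketch of one connect_authenticate iteration (sha256 / ffdhe4096 / key0), assuming the
    # paths and NQNs seen in this log; not the actual test script.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    # 1. Restrict the SPDK host (initiator) to one digest/DH-group combination (host RPC socket).
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # 2. Allow the host NQN on the subsystem with a DH-HMAC-CHAP key pair (target-side RPC;
    #    the log's rpc_cmd helper is approximated here by rpc.py on its default socket).
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Attach an SPDK initiator controller and check that the qpair actually authenticated.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'     # expected: completed
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # 4. Repeat the handshake from the kernel initiator, then tear everything down.
    #    DHCHAP_KEY / DHCHAP_CTRL_KEY stand in for the DHHC-1:... secrets printed in the log.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
    nvme disconnect -n "$SUBNQN"
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The log below continues this loop for the remaining keys (key1 through key3) and the larger DH groups (ffdhe6144, ffdhe8192), changing only the --dhchap-dhgroups value, the key index, and the DHHC-1 secrets.
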
00:16:44.494 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.495 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.495 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.495 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.495 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.495 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.495 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.755 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:44.755 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.755 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.755 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:44.755 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:44.755 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.755 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.755 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.755 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.755 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.755 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.755 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.015 00:16:45.015 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.015 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.015 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.015 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.015 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.015 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.015 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.276 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.276 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.276 { 00:16:45.276 "cntlid": 27, 00:16:45.276 "qid": 0, 00:16:45.276 "state": "enabled", 00:16:45.276 "thread": "nvmf_tgt_poll_group_000", 00:16:45.276 "listen_address": { 00:16:45.276 "trtype": "TCP", 00:16:45.276 "adrfam": "IPv4", 00:16:45.276 "traddr": "10.0.0.2", 00:16:45.276 "trsvcid": "4420" 00:16:45.276 }, 00:16:45.276 "peer_address": { 00:16:45.276 "trtype": "TCP", 00:16:45.276 "adrfam": "IPv4", 00:16:45.276 "traddr": "10.0.0.1", 00:16:45.276 "trsvcid": "45106" 00:16:45.276 }, 00:16:45.276 "auth": { 00:16:45.276 "state": "completed", 00:16:45.276 "digest": "sha256", 00:16:45.276 "dhgroup": "ffdhe4096" 00:16:45.276 } 00:16:45.276 } 00:16:45.276 ]' 00:16:45.276 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.276 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.276 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.276 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.276 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.276 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.276 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.276 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.534 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:16:46.105 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.105 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.105 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.105 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.105 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.105 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.105 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.105 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.105 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:46.105 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.105 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.105 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:46.105 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:46.105 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.105 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.105 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.105 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.105 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.105 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.105 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.364 00:16:46.364 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.364 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.364 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.624 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.624 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.624 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.624 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.624 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.624 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.624 { 00:16:46.624 "cntlid": 29, 00:16:46.624 "qid": 0, 00:16:46.624 "state": "enabled", 00:16:46.624 "thread": "nvmf_tgt_poll_group_000", 00:16:46.624 "listen_address": { 00:16:46.624 "trtype": "TCP", 00:16:46.624 "adrfam": "IPv4", 00:16:46.624 "traddr": "10.0.0.2", 00:16:46.624 "trsvcid": "4420" 00:16:46.624 }, 00:16:46.624 "peer_address": { 00:16:46.624 "trtype": "TCP", 00:16:46.624 "adrfam": "IPv4", 00:16:46.624 "traddr": "10.0.0.1", 00:16:46.624 "trsvcid": "45132" 00:16:46.624 }, 00:16:46.624 "auth": { 00:16:46.624 "state": "completed", 00:16:46.624 "digest": "sha256", 00:16:46.624 "dhgroup": "ffdhe4096" 00:16:46.624 } 00:16:46.624 } 00:16:46.624 ]' 00:16:46.624 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.624 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.624 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.624 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.624 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.883 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.883 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.883 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.883 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:16:47.452 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.452 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.452 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.452 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.452 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.452 21:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.452 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.452 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.711 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:47.711 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.711 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.711 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:47.711 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:47.711 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.711 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:47.711 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.711 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.711 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.711 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.711 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.969 00:16:47.969 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.969 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.969 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.969 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.969 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.969 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.969 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.228 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:16:48.228 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.228 { 00:16:48.228 "cntlid": 31, 00:16:48.228 "qid": 0, 00:16:48.228 "state": "enabled", 00:16:48.228 "thread": "nvmf_tgt_poll_group_000", 00:16:48.228 "listen_address": { 00:16:48.228 "trtype": "TCP", 00:16:48.228 "adrfam": "IPv4", 00:16:48.228 "traddr": "10.0.0.2", 00:16:48.228 "trsvcid": "4420" 00:16:48.228 }, 00:16:48.228 "peer_address": { 00:16:48.228 "trtype": "TCP", 00:16:48.228 "adrfam": "IPv4", 00:16:48.228 "traddr": "10.0.0.1", 00:16:48.228 "trsvcid": "45154" 00:16:48.228 }, 00:16:48.228 "auth": { 00:16:48.228 "state": "completed", 00:16:48.228 "digest": "sha256", 00:16:48.228 "dhgroup": "ffdhe4096" 00:16:48.228 } 00:16:48.228 } 00:16:48.228 ]' 00:16:48.228 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.228 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.228 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.228 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.228 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.228 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.228 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.228 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.487 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:16:49.055 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.055 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.055 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.055 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.055 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.055 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.055 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.055 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.055 21:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.055 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:49.055 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.055 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.055 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:49.055 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:49.055 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.055 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.055 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.055 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.055 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.055 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.055 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.315 00:16:49.575 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.575 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.575 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.575 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.575 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.575 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.575 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.575 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.575 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.575 { 00:16:49.575 "cntlid": 33, 00:16:49.575 "qid": 0, 00:16:49.575 "state": "enabled", 00:16:49.575 "thread": "nvmf_tgt_poll_group_000", 00:16:49.575 "listen_address": { 
00:16:49.575 "trtype": "TCP", 00:16:49.575 "adrfam": "IPv4", 00:16:49.575 "traddr": "10.0.0.2", 00:16:49.575 "trsvcid": "4420" 00:16:49.575 }, 00:16:49.575 "peer_address": { 00:16:49.575 "trtype": "TCP", 00:16:49.575 "adrfam": "IPv4", 00:16:49.575 "traddr": "10.0.0.1", 00:16:49.575 "trsvcid": "41144" 00:16:49.575 }, 00:16:49.575 "auth": { 00:16:49.575 "state": "completed", 00:16:49.575 "digest": "sha256", 00:16:49.575 "dhgroup": "ffdhe6144" 00:16:49.575 } 00:16:49.575 } 00:16:49.575 ]' 00:16:49.575 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.575 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.575 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.874 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.874 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.874 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.874 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.874 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.874 21:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:16:50.444 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.444 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.444 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.444 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.444 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.444 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.444 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.444 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.704 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:50.704 21:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.704 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.704 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:50.704 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.704 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.704 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.704 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.704 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.704 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.704 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.704 21:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.964 00:16:50.964 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.964 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.964 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.224 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.224 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.224 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.224 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.224 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.224 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.224 { 00:16:51.224 "cntlid": 35, 00:16:51.224 "qid": 0, 00:16:51.224 "state": "enabled", 00:16:51.224 "thread": "nvmf_tgt_poll_group_000", 00:16:51.224 "listen_address": { 00:16:51.224 "trtype": "TCP", 00:16:51.224 "adrfam": "IPv4", 00:16:51.224 "traddr": "10.0.0.2", 00:16:51.224 "trsvcid": "4420" 00:16:51.224 }, 00:16:51.224 "peer_address": { 00:16:51.224 "trtype": "TCP", 00:16:51.224 "adrfam": "IPv4", 00:16:51.224 "traddr": "10.0.0.1", 00:16:51.224 "trsvcid": "41182" 00:16:51.224 
}, 00:16:51.224 "auth": { 00:16:51.224 "state": "completed", 00:16:51.224 "digest": "sha256", 00:16:51.224 "dhgroup": "ffdhe6144" 00:16:51.224 } 00:16:51.224 } 00:16:51.224 ]' 00:16:51.224 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.224 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.224 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.224 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.224 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.484 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.484 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.484 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.484 21:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:16:52.053 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.053 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.053 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.053 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.053 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.053 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.053 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.053 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.313 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:52.313 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.313 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.313 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:52.313 21:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.313 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.313 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.313 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.313 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.313 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.313 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.313 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.573 00:16:52.833 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.833 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.833 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.833 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.833 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.833 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.833 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.833 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.833 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.833 { 00:16:52.833 "cntlid": 37, 00:16:52.833 "qid": 0, 00:16:52.833 "state": "enabled", 00:16:52.833 "thread": "nvmf_tgt_poll_group_000", 00:16:52.833 "listen_address": { 00:16:52.833 "trtype": "TCP", 00:16:52.833 "adrfam": "IPv4", 00:16:52.833 "traddr": "10.0.0.2", 00:16:52.833 "trsvcid": "4420" 00:16:52.833 }, 00:16:52.833 "peer_address": { 00:16:52.833 "trtype": "TCP", 00:16:52.833 "adrfam": "IPv4", 00:16:52.833 "traddr": "10.0.0.1", 00:16:52.833 "trsvcid": "41212" 00:16:52.833 }, 00:16:52.833 "auth": { 00:16:52.833 "state": "completed", 00:16:52.833 "digest": "sha256", 00:16:52.833 "dhgroup": "ffdhe6144" 00:16:52.833 } 00:16:52.833 } 00:16:52.833 ]' 00:16:52.833 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.092 21:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.092 21:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.092 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.092 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.092 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.092 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.092 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.351 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.919 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.920 21:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.178 00:16:54.438 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.438 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.438 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.438 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.438 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.438 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.438 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.438 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.438 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.438 { 00:16:54.438 "cntlid": 39, 00:16:54.438 "qid": 0, 00:16:54.438 "state": "enabled", 00:16:54.438 "thread": "nvmf_tgt_poll_group_000", 00:16:54.438 "listen_address": { 00:16:54.438 "trtype": "TCP", 00:16:54.438 "adrfam": "IPv4", 00:16:54.438 "traddr": "10.0.0.2", 00:16:54.438 "trsvcid": "4420" 00:16:54.438 }, 00:16:54.438 "peer_address": { 00:16:54.438 "trtype": "TCP", 00:16:54.438 "adrfam": "IPv4", 00:16:54.438 "traddr": "10.0.0.1", 00:16:54.438 "trsvcid": "41224" 00:16:54.438 }, 00:16:54.438 "auth": { 00:16:54.438 "state": "completed", 00:16:54.438 "digest": "sha256", 00:16:54.438 "dhgroup": "ffdhe6144" 00:16:54.438 } 00:16:54.438 } 00:16:54.438 ]' 00:16:54.438 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.438 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.438 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.698 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.698 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.698 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.698 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.698 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.698 21:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:16:55.268 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.268 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.268 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.268 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.268 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.268 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.268 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.268 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.268 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.529 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:55.529 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.529 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.529 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:55.529 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:55.529 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.529 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.529 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.529 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.529 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.529 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.529 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.097 00:16:56.097 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.097 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.098 21:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.098 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.098 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.098 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.098 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.098 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.098 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.098 { 00:16:56.098 "cntlid": 41, 00:16:56.098 "qid": 0, 00:16:56.098 "state": "enabled", 00:16:56.098 "thread": "nvmf_tgt_poll_group_000", 00:16:56.098 "listen_address": { 00:16:56.098 "trtype": "TCP", 00:16:56.098 "adrfam": "IPv4", 00:16:56.098 "traddr": "10.0.0.2", 00:16:56.098 "trsvcid": "4420" 00:16:56.098 }, 00:16:56.098 "peer_address": { 00:16:56.098 "trtype": "TCP", 00:16:56.098 "adrfam": "IPv4", 00:16:56.098 "traddr": "10.0.0.1", 00:16:56.098 "trsvcid": "41264" 00:16:56.098 }, 00:16:56.098 "auth": { 00:16:56.098 "state": "completed", 00:16:56.098 "digest": "sha256", 00:16:56.098 "dhgroup": "ffdhe8192" 00:16:56.098 } 00:16:56.098 } 00:16:56.098 ]' 00:16:56.098 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.357 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.357 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.357 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.357 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.357 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.357 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:56.357 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.617 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:16:56.876 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.137 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.137 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.137 21:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.137 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.707 00:16:57.707 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.707 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.707 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.002 { 00:16:58.002 "cntlid": 43, 00:16:58.002 "qid": 0, 00:16:58.002 "state": "enabled", 00:16:58.002 "thread": "nvmf_tgt_poll_group_000", 00:16:58.002 "listen_address": { 00:16:58.002 "trtype": "TCP", 00:16:58.002 "adrfam": "IPv4", 00:16:58.002 "traddr": "10.0.0.2", 00:16:58.002 "trsvcid": "4420" 00:16:58.002 }, 00:16:58.002 "peer_address": { 00:16:58.002 "trtype": "TCP", 00:16:58.002 "adrfam": "IPv4", 00:16:58.002 "traddr": "10.0.0.1", 00:16:58.002 "trsvcid": "41272" 00:16:58.002 }, 00:16:58.002 "auth": { 00:16:58.002 "state": "completed", 00:16:58.002 "digest": "sha256", 00:16:58.002 "dhgroup": "ffdhe8192" 00:16:58.002 } 00:16:58.002 } 00:16:58.002 ]' 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.002 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.261 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.831 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.401 00:16:59.401 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.401 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.402 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.662 { 00:16:59.662 "cntlid": 45, 00:16:59.662 "qid": 0, 00:16:59.662 "state": "enabled", 00:16:59.662 "thread": "nvmf_tgt_poll_group_000", 00:16:59.662 "listen_address": { 00:16:59.662 "trtype": "TCP", 00:16:59.662 "adrfam": "IPv4", 00:16:59.662 "traddr": "10.0.0.2", 00:16:59.662 "trsvcid": "4420" 00:16:59.662 }, 00:16:59.662 "peer_address": { 00:16:59.662 "trtype": "TCP", 00:16:59.662 "adrfam": "IPv4", 00:16:59.662 "traddr": "10.0.0.1", 00:16:59.662 "trsvcid": "55950" 00:16:59.662 }, 00:16:59.662 "auth": { 00:16:59.662 "state": "completed", 00:16:59.662 "digest": "sha256", 00:16:59.662 "dhgroup": "ffdhe8192" 00:16:59.662 } 00:16:59.662 } 00:16:59.662 ]' 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.662 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.922 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret 
DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.492 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.062 00:17:01.062 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.062 21:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.062 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.321 { 00:17:01.321 "cntlid": 47, 00:17:01.321 "qid": 0, 00:17:01.321 "state": "enabled", 00:17:01.321 "thread": "nvmf_tgt_poll_group_000", 00:17:01.321 "listen_address": { 00:17:01.321 "trtype": "TCP", 00:17:01.321 "adrfam": "IPv4", 00:17:01.321 "traddr": "10.0.0.2", 00:17:01.321 "trsvcid": "4420" 00:17:01.321 }, 00:17:01.321 "peer_address": { 00:17:01.321 "trtype": "TCP", 00:17:01.321 "adrfam": "IPv4", 00:17:01.321 "traddr": "10.0.0.1", 00:17:01.321 "trsvcid": "55986" 00:17:01.321 }, 00:17:01.321 "auth": { 00:17:01.321 "state": "completed", 00:17:01.321 "digest": "sha256", 00:17:01.321 "dhgroup": "ffdhe8192" 00:17:01.321 } 00:17:01.321 } 00:17:01.321 ]' 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.321 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.581 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:17:02.149 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.149 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.149 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.149 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.149 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.149 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:02.149 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.149 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.149 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.149 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.409 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:02.409 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.409 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:02.409 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:02.409 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:02.409 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.409 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.409 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.409 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.409 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.409 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.409 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.409 00:17:02.669 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.669 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:02.669 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.669 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.669 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.669 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.669 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.669 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.669 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.669 { 00:17:02.669 "cntlid": 49, 00:17:02.669 "qid": 0, 00:17:02.669 "state": "enabled", 00:17:02.669 "thread": "nvmf_tgt_poll_group_000", 00:17:02.669 "listen_address": { 00:17:02.669 "trtype": "TCP", 00:17:02.669 "adrfam": "IPv4", 00:17:02.669 "traddr": "10.0.0.2", 00:17:02.669 "trsvcid": "4420" 00:17:02.669 }, 00:17:02.669 "peer_address": { 00:17:02.669 "trtype": "TCP", 00:17:02.669 "adrfam": "IPv4", 00:17:02.669 "traddr": "10.0.0.1", 00:17:02.669 "trsvcid": "56020" 00:17:02.669 }, 00:17:02.669 "auth": { 00:17:02.669 "state": "completed", 00:17:02.669 "digest": "sha384", 00:17:02.669 "dhgroup": "null" 00:17:02.669 } 00:17:02.669 } 00:17:02.669 ]' 00:17:02.669 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.669 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.669 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.929 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:02.929 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.929 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.929 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.929 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.929 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:17:03.499 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.499 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.499 21:42:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.499 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.499 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.499 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.499 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.499 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.759 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:03.759 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.759 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:03.759 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:03.759 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:03.759 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.759 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.759 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.759 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.759 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.759 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.759 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.019 00:17:04.019 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.019 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.019 21:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.279 { 00:17:04.279 "cntlid": 51, 00:17:04.279 "qid": 0, 00:17:04.279 "state": "enabled", 00:17:04.279 "thread": "nvmf_tgt_poll_group_000", 00:17:04.279 "listen_address": { 00:17:04.279 "trtype": "TCP", 00:17:04.279 "adrfam": "IPv4", 00:17:04.279 "traddr": "10.0.0.2", 00:17:04.279 "trsvcid": "4420" 00:17:04.279 }, 00:17:04.279 "peer_address": { 00:17:04.279 "trtype": "TCP", 00:17:04.279 "adrfam": "IPv4", 00:17:04.279 "traddr": "10.0.0.1", 00:17:04.279 "trsvcid": "56056" 00:17:04.279 }, 00:17:04.279 "auth": { 00:17:04.279 "state": "completed", 00:17:04.279 "digest": "sha384", 00:17:04.279 "dhgroup": "null" 00:17:04.279 } 00:17:04.279 } 00:17:04.279 ]' 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.279 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.539 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.110 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.370 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.370 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.370 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.370 00:17:05.370 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.370 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.370 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.629 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.629 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.629 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.629 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.629 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:17:05.629 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.629 { 00:17:05.629 "cntlid": 53, 00:17:05.629 "qid": 0, 00:17:05.629 "state": "enabled", 00:17:05.629 "thread": "nvmf_tgt_poll_group_000", 00:17:05.629 "listen_address": { 00:17:05.629 "trtype": "TCP", 00:17:05.629 "adrfam": "IPv4", 00:17:05.630 "traddr": "10.0.0.2", 00:17:05.630 "trsvcid": "4420" 00:17:05.630 }, 00:17:05.630 "peer_address": { 00:17:05.630 "trtype": "TCP", 00:17:05.630 "adrfam": "IPv4", 00:17:05.630 "traddr": "10.0.0.1", 00:17:05.630 "trsvcid": "56090" 00:17:05.630 }, 00:17:05.630 "auth": { 00:17:05.630 "state": "completed", 00:17:05.630 "digest": "sha384", 00:17:05.630 "dhgroup": "null" 00:17:05.630 } 00:17:05.630 } 00:17:05.630 ]' 00:17:05.630 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.630 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.630 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.630 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:05.630 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.890 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.890 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.890 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.890 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:17:06.524 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.524 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.524 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.524 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.524 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.524 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.524 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.524 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.785 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:06.785 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.785 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.785 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:06.785 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:06.785 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.785 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:06.785 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.785 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.785 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.785 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.785 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.045 00:17:07.045 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.045 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.045 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.045 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.045 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.045 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.045 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.305 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.305 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.305 { 00:17:07.305 "cntlid": 55, 00:17:07.305 "qid": 0, 00:17:07.305 "state": "enabled", 00:17:07.305 "thread": "nvmf_tgt_poll_group_000", 00:17:07.305 "listen_address": { 00:17:07.305 "trtype": "TCP", 00:17:07.305 "adrfam": "IPv4", 00:17:07.305 "traddr": "10.0.0.2", 00:17:07.305 "trsvcid": "4420" 00:17:07.305 }, 00:17:07.305 "peer_address": { 
00:17:07.305 "trtype": "TCP", 00:17:07.305 "adrfam": "IPv4", 00:17:07.305 "traddr": "10.0.0.1", 00:17:07.305 "trsvcid": "56112" 00:17:07.305 }, 00:17:07.305 "auth": { 00:17:07.305 "state": "completed", 00:17:07.305 "digest": "sha384", 00:17:07.305 "dhgroup": "null" 00:17:07.305 } 00:17:07.305 } 00:17:07.305 ]' 00:17:07.305 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.305 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.305 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.305 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:07.305 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.305 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.305 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.305 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.565 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:17:08.135 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.136 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.396 00:17:08.396 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.396 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.396 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.656 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.656 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.656 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.656 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.656 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.656 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.656 { 00:17:08.656 "cntlid": 57, 00:17:08.656 "qid": 0, 00:17:08.656 "state": "enabled", 00:17:08.656 "thread": "nvmf_tgt_poll_group_000", 00:17:08.656 "listen_address": { 00:17:08.656 "trtype": "TCP", 00:17:08.656 "adrfam": "IPv4", 00:17:08.656 "traddr": "10.0.0.2", 00:17:08.656 "trsvcid": "4420" 00:17:08.656 }, 00:17:08.656 "peer_address": { 00:17:08.656 "trtype": "TCP", 00:17:08.656 "adrfam": "IPv4", 00:17:08.656 "traddr": "10.0.0.1", 00:17:08.656 "trsvcid": "46564" 00:17:08.656 }, 00:17:08.656 "auth": { 00:17:08.656 "state": "completed", 00:17:08.656 "digest": "sha384", 00:17:08.656 "dhgroup": "ffdhe2048" 00:17:08.656 } 00:17:08.656 } 00:17:08.656 ]' 
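The trace above is one pass of the test's per-combination loop: the host-side bdev layer is restricted to a single DH-HMAC-CHAP digest/dhgroup pair, the host NQN is registered on the subsystem with a key pair, a controller is attached, and the negotiated auth parameters reported by nvmf_subsystem_get_qpairs are checked with jq before everything is torn down again. Below is a minimal sketch of that flow, assuming rpc.py is on PATH, the host RPC server listens on /var/tmp/host.sock as in the trace, the target app listens on its default RPC socket, and key0/ckey0 name keys registered earlier in the run (not shown here); this is a condensed illustration, not the actual target/auth.sh.

  # host- and target-side RPC helpers (simplified stand-ins for the test's hostrpc/rpc_cmd wrappers)
  hostrpc() { rpc.py -s /var/tmp/host.sock "$@"; }
  tgtrpc()  { rpc.py "$@"; }

  digest=sha384; dhgroup=ffdhe2048
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

  # restrict the initiator to one digest/dhgroup combination
  hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # register the host on the subsystem with its DH-HMAC-CHAP key pair
  tgtrpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach a controller and read back the negotiated auth parameters from the target qpair
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  tgtrpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'
  # tear down before the next digest/dhgroup/key combination
  hostrpc bdev_nvme_detach_controller nvme0
  tgtrpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the trace, the script additionally re-connects through the kernel initiator between the detach and the remove_host step, passing the raw DHHC-1 strings to nvme connect via --dhchap-secret/--dhchap-ctrl-secret and disconnecting again, so both the SPDK host stack and nvme-cli exercise the same key material.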
00:17:08.656 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.656 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.656 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.656 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.656 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.916 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.916 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.916 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.916 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:17:09.486 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.486 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.486 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.486 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.486 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.486 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.486 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.486 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.746 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:09.746 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.746 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:09.746 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:09.746 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:09.746 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.746 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.746 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.746 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.746 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.746 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.746 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.005 00:17:10.005 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.005 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.005 21:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.265 { 00:17:10.265 "cntlid": 59, 00:17:10.265 "qid": 0, 00:17:10.265 "state": "enabled", 00:17:10.265 "thread": "nvmf_tgt_poll_group_000", 00:17:10.265 "listen_address": { 00:17:10.265 "trtype": "TCP", 00:17:10.265 "adrfam": "IPv4", 00:17:10.265 "traddr": "10.0.0.2", 00:17:10.265 "trsvcid": "4420" 00:17:10.265 }, 00:17:10.265 "peer_address": { 00:17:10.265 "trtype": "TCP", 00:17:10.265 "adrfam": "IPv4", 00:17:10.265 "traddr": "10.0.0.1", 00:17:10.265 "trsvcid": "46592" 00:17:10.265 }, 00:17:10.265 "auth": { 00:17:10.265 "state": "completed", 00:17:10.265 "digest": "sha384", 00:17:10.265 "dhgroup": "ffdhe2048" 00:17:10.265 } 00:17:10.265 } 00:17:10.265 ]' 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.265 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.525 21:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:17:11.094 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.094 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.094 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.094 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.094 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.094 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.094 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.094 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.354 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:11.354 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.354 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.354 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:11.354 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:11.354 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.354 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.354 
21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.354 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.354 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.354 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.354 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.354 00:17:11.614 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.614 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.614 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.614 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.614 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.614 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.614 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.614 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.614 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.614 { 00:17:11.614 "cntlid": 61, 00:17:11.614 "qid": 0, 00:17:11.614 "state": "enabled", 00:17:11.614 "thread": "nvmf_tgt_poll_group_000", 00:17:11.614 "listen_address": { 00:17:11.614 "trtype": "TCP", 00:17:11.614 "adrfam": "IPv4", 00:17:11.614 "traddr": "10.0.0.2", 00:17:11.614 "trsvcid": "4420" 00:17:11.614 }, 00:17:11.614 "peer_address": { 00:17:11.614 "trtype": "TCP", 00:17:11.614 "adrfam": "IPv4", 00:17:11.614 "traddr": "10.0.0.1", 00:17:11.614 "trsvcid": "46614" 00:17:11.614 }, 00:17:11.614 "auth": { 00:17:11.614 "state": "completed", 00:17:11.614 "digest": "sha384", 00:17:11.614 "dhgroup": "ffdhe2048" 00:17:11.614 } 00:17:11.614 } 00:17:11.614 ]' 00:17:11.614 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.614 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.614 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.873 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.873 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.873 21:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.873 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.873 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.873 21:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:17:12.440 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.440 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.440 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.440 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.440 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.440 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.440 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.440 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.698 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:12.698 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.698 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:12.698 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:12.698 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:12.698 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.698 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:12.698 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.698 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.698 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.698 
21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.698 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.958 00:17:12.958 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.958 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.958 21:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.218 { 00:17:13.218 "cntlid": 63, 00:17:13.218 "qid": 0, 00:17:13.218 "state": "enabled", 00:17:13.218 "thread": "nvmf_tgt_poll_group_000", 00:17:13.218 "listen_address": { 00:17:13.218 "trtype": "TCP", 00:17:13.218 "adrfam": "IPv4", 00:17:13.218 "traddr": "10.0.0.2", 00:17:13.218 "trsvcid": "4420" 00:17:13.218 }, 00:17:13.218 "peer_address": { 00:17:13.218 "trtype": "TCP", 00:17:13.218 "adrfam": "IPv4", 00:17:13.218 "traddr": "10.0.0.1", 00:17:13.218 "trsvcid": "46644" 00:17:13.218 }, 00:17:13.218 "auth": { 00:17:13.218 "state": "completed", 00:17:13.218 "digest": "sha384", 00:17:13.218 "dhgroup": "ffdhe2048" 00:17:13.218 } 00:17:13.218 } 00:17:13.218 ]' 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.218 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:13.478 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:17:14.047 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.047 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.047 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.047 21:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.047 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.047 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.047 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.047 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.047 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.306 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:14.306 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.306 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:14.306 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:14.306 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:14.306 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.306 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.306 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.306 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.306 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.306 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.306 21:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.306 00:17:14.565 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.565 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.565 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.565 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.565 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.565 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.565 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.565 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.565 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.565 { 00:17:14.565 "cntlid": 65, 00:17:14.565 "qid": 0, 00:17:14.565 "state": "enabled", 00:17:14.565 "thread": "nvmf_tgt_poll_group_000", 00:17:14.565 "listen_address": { 00:17:14.565 "trtype": "TCP", 00:17:14.565 "adrfam": "IPv4", 00:17:14.565 "traddr": "10.0.0.2", 00:17:14.566 "trsvcid": "4420" 00:17:14.566 }, 00:17:14.566 "peer_address": { 00:17:14.566 "trtype": "TCP", 00:17:14.566 "adrfam": "IPv4", 00:17:14.566 "traddr": "10.0.0.1", 00:17:14.566 "trsvcid": "46676" 00:17:14.566 }, 00:17:14.566 "auth": { 00:17:14.566 "state": "completed", 00:17:14.566 "digest": "sha384", 00:17:14.566 "dhgroup": "ffdhe3072" 00:17:14.566 } 00:17:14.566 } 00:17:14.566 ]' 00:17:14.566 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.566 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.566 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.566 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.566 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.824 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.824 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.824 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.824 21:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:17:15.401 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.402 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.402 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.402 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.402 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.402 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.402 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.402 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.663 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:15.663 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.663 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.663 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:15.663 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.663 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.663 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.663 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.663 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.663 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.663 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.663 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.922 00:17:15.922 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.922 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.922 21:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.182 { 00:17:16.182 "cntlid": 67, 00:17:16.182 "qid": 0, 00:17:16.182 "state": "enabled", 00:17:16.182 "thread": "nvmf_tgt_poll_group_000", 00:17:16.182 "listen_address": { 00:17:16.182 "trtype": "TCP", 00:17:16.182 "adrfam": "IPv4", 00:17:16.182 "traddr": "10.0.0.2", 00:17:16.182 "trsvcid": "4420" 00:17:16.182 }, 00:17:16.182 "peer_address": { 00:17:16.182 "trtype": "TCP", 00:17:16.182 "adrfam": "IPv4", 00:17:16.182 "traddr": "10.0.0.1", 00:17:16.182 "trsvcid": "46696" 00:17:16.182 }, 00:17:16.182 "auth": { 00:17:16.182 "state": "completed", 00:17:16.182 "digest": "sha384", 00:17:16.182 "dhgroup": "ffdhe3072" 00:17:16.182 } 00:17:16.182 } 00:17:16.182 ]' 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.182 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.442 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:17:17.011 21:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.011 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.011 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.011 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.011 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.011 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.011 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.011 21:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.011 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:17.011 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.011 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.011 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:17.011 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:17.011 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.011 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.011 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.011 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.011 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.011 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.011 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.270 00:17:17.270 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.270 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.270 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.529 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.529 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.529 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.529 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.529 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.529 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.529 { 00:17:17.529 "cntlid": 69, 00:17:17.529 "qid": 0, 00:17:17.529 "state": "enabled", 00:17:17.529 "thread": "nvmf_tgt_poll_group_000", 00:17:17.529 "listen_address": { 00:17:17.529 "trtype": "TCP", 00:17:17.529 "adrfam": "IPv4", 00:17:17.529 "traddr": "10.0.0.2", 00:17:17.529 "trsvcid": "4420" 00:17:17.529 }, 00:17:17.529 "peer_address": { 00:17:17.529 "trtype": "TCP", 00:17:17.529 "adrfam": "IPv4", 00:17:17.529 "traddr": "10.0.0.1", 00:17:17.529 "trsvcid": "46716" 00:17:17.529 }, 00:17:17.529 "auth": { 00:17:17.529 "state": "completed", 00:17:17.529 "digest": "sha384", 00:17:17.529 "dhgroup": "ffdhe3072" 00:17:17.529 } 00:17:17.529 } 00:17:17.529 ]' 00:17:17.529 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.529 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.529 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.529 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.529 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.789 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.789 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.789 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.789 21:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:17:18.358 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.358 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.358 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.358 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.358 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.358 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.358 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.358 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.617 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:18.617 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.617 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:18.617 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:18.617 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.617 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.617 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:18.617 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.617 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.617 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.617 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.617 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.876 00:17:18.876 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.876 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.876 21:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.135 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.135 21:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.135 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.135 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.135 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.135 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.135 { 00:17:19.135 "cntlid": 71, 00:17:19.135 "qid": 0, 00:17:19.135 "state": "enabled", 00:17:19.135 "thread": "nvmf_tgt_poll_group_000", 00:17:19.135 "listen_address": { 00:17:19.135 "trtype": "TCP", 00:17:19.135 "adrfam": "IPv4", 00:17:19.135 "traddr": "10.0.0.2", 00:17:19.135 "trsvcid": "4420" 00:17:19.135 }, 00:17:19.135 "peer_address": { 00:17:19.135 "trtype": "TCP", 00:17:19.135 "adrfam": "IPv4", 00:17:19.135 "traddr": "10.0.0.1", 00:17:19.135 "trsvcid": "50040" 00:17:19.136 }, 00:17:19.136 "auth": { 00:17:19.136 "state": "completed", 00:17:19.136 "digest": "sha384", 00:17:19.136 "dhgroup": "ffdhe3072" 00:17:19.136 } 00:17:19.136 } 00:17:19.136 ]' 00:17:19.136 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.136 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.136 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.136 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.136 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.136 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.136 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.136 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.395 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:17:19.964 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.964 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.964 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.964 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.964 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.964 21:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.964 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.964 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.964 21:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.224 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:20.224 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.224 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.224 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:20.224 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:20.224 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.225 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.225 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.225 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.225 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.225 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.225 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.495 00:17:20.495 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.495 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.495 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.495 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.495 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.495 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.495 21:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.495 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.495 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.495 { 00:17:20.495 "cntlid": 73, 00:17:20.495 "qid": 0, 00:17:20.495 "state": "enabled", 00:17:20.495 "thread": "nvmf_tgt_poll_group_000", 00:17:20.495 "listen_address": { 00:17:20.495 "trtype": "TCP", 00:17:20.495 "adrfam": "IPv4", 00:17:20.495 "traddr": "10.0.0.2", 00:17:20.495 "trsvcid": "4420" 00:17:20.495 }, 00:17:20.495 "peer_address": { 00:17:20.495 "trtype": "TCP", 00:17:20.495 "adrfam": "IPv4", 00:17:20.495 "traddr": "10.0.0.1", 00:17:20.495 "trsvcid": "50064" 00:17:20.495 }, 00:17:20.495 "auth": { 00:17:20.495 "state": "completed", 00:17:20.495 "digest": "sha384", 00:17:20.495 "dhgroup": "ffdhe4096" 00:17:20.495 } 00:17:20.495 } 00:17:20.495 ]' 00:17:20.495 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.495 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.495 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.754 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.754 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.754 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.754 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.754 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.754 21:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:17:21.323 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.323 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.323 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.323 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.323 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.323 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.323 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.323 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.582 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:21.582 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.582 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.582 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:21.582 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:21.582 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.582 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.583 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.583 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.583 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.583 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.583 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.842 00:17:21.842 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.842 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.842 21:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:17:22.101 { 00:17:22.101 "cntlid": 75, 00:17:22.101 "qid": 0, 00:17:22.101 "state": "enabled", 00:17:22.101 "thread": "nvmf_tgt_poll_group_000", 00:17:22.101 "listen_address": { 00:17:22.101 "trtype": "TCP", 00:17:22.101 "adrfam": "IPv4", 00:17:22.101 "traddr": "10.0.0.2", 00:17:22.101 "trsvcid": "4420" 00:17:22.101 }, 00:17:22.101 "peer_address": { 00:17:22.101 "trtype": "TCP", 00:17:22.101 "adrfam": "IPv4", 00:17:22.101 "traddr": "10.0.0.1", 00:17:22.101 "trsvcid": "50100" 00:17:22.101 }, 00:17:22.101 "auth": { 00:17:22.101 "state": "completed", 00:17:22.101 "digest": "sha384", 00:17:22.101 "dhgroup": "ffdhe4096" 00:17:22.101 } 00:17:22.101 } 00:17:22.101 ]' 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.101 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.360 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:17:22.972 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.972 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.972 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.972 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.972 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.972 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.972 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.972 21:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.972 
21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:22.972 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.972 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.972 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:22.972 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:22.972 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.972 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.972 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.972 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.972 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.972 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.972 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.232 00:17:23.232 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.232 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.232 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.490 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.490 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.490 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.490 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.490 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.490 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.490 { 00:17:23.490 "cntlid": 77, 00:17:23.490 "qid": 0, 00:17:23.490 "state": "enabled", 00:17:23.490 "thread": "nvmf_tgt_poll_group_000", 00:17:23.490 "listen_address": { 00:17:23.490 "trtype": "TCP", 00:17:23.490 "adrfam": "IPv4", 00:17:23.490 "traddr": "10.0.0.2", 00:17:23.490 "trsvcid": "4420" 00:17:23.490 }, 00:17:23.490 "peer_address": { 
00:17:23.490 "trtype": "TCP", 00:17:23.490 "adrfam": "IPv4", 00:17:23.490 "traddr": "10.0.0.1", 00:17:23.490 "trsvcid": "50136" 00:17:23.490 }, 00:17:23.490 "auth": { 00:17:23.490 "state": "completed", 00:17:23.490 "digest": "sha384", 00:17:23.490 "dhgroup": "ffdhe4096" 00:17:23.490 } 00:17:23.490 } 00:17:23.490 ]' 00:17:23.490 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.490 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.490 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.490 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.490 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.750 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.750 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.750 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.750 21:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:17:24.319 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.319 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.319 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.319 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.319 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.319 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.319 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.319 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.578 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:24.578 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.579 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:17:24.579 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:24.579 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:24.579 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.579 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:24.579 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.579 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.579 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.579 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.579 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.837 00:17:24.837 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.837 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.837 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.097 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.097 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.097 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.097 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.097 21:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.097 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.097 { 00:17:25.097 "cntlid": 79, 00:17:25.097 "qid": 0, 00:17:25.097 "state": "enabled", 00:17:25.097 "thread": "nvmf_tgt_poll_group_000", 00:17:25.097 "listen_address": { 00:17:25.097 "trtype": "TCP", 00:17:25.097 "adrfam": "IPv4", 00:17:25.097 "traddr": "10.0.0.2", 00:17:25.097 "trsvcid": "4420" 00:17:25.097 }, 00:17:25.097 "peer_address": { 00:17:25.097 "trtype": "TCP", 00:17:25.097 "adrfam": "IPv4", 00:17:25.097 "traddr": "10.0.0.1", 00:17:25.097 "trsvcid": "50168" 00:17:25.097 }, 00:17:25.097 "auth": { 00:17:25.097 "state": "completed", 00:17:25.097 "digest": "sha384", 00:17:25.097 "dhgroup": "ffdhe4096" 00:17:25.097 } 00:17:25.097 } 00:17:25.097 ]' 00:17:25.097 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:25.097 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.097 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.097 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.097 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.097 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.097 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.097 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.364 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:17:25.934 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.934 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.934 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.934 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.934 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.934 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.934 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.934 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.934 21:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.934 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:25.934 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.934 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.934 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:25.934 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.934 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:17:25.934 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.934 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.934 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.934 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.934 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.934 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.502 00:17:26.503 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.503 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.503 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.503 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.503 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.503 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.503 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.503 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.503 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.503 { 00:17:26.503 "cntlid": 81, 00:17:26.503 "qid": 0, 00:17:26.503 "state": "enabled", 00:17:26.503 "thread": "nvmf_tgt_poll_group_000", 00:17:26.503 "listen_address": { 00:17:26.503 "trtype": "TCP", 00:17:26.503 "adrfam": "IPv4", 00:17:26.503 "traddr": "10.0.0.2", 00:17:26.503 "trsvcid": "4420" 00:17:26.503 }, 00:17:26.503 "peer_address": { 00:17:26.503 "trtype": "TCP", 00:17:26.503 "adrfam": "IPv4", 00:17:26.503 "traddr": "10.0.0.1", 00:17:26.503 "trsvcid": "50200" 00:17:26.503 }, 00:17:26.503 "auth": { 00:17:26.503 "state": "completed", 00:17:26.503 "digest": "sha384", 00:17:26.503 "dhgroup": "ffdhe6144" 00:17:26.503 } 00:17:26.503 } 00:17:26.503 ]' 00:17:26.503 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.503 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.503 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.762 21:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:26.762 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.762 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.762 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.762 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.762 21:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:17:27.331 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.331 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.331 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.331 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.331 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.331 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.331 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.331 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.591 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:27.591 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.591 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:27.591 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:27.591 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:27.591 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.591 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.591 21:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.591 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.591 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.591 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.591 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.851 00:17:27.851 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.851 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.851 21:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.111 { 00:17:28.111 "cntlid": 83, 00:17:28.111 "qid": 0, 00:17:28.111 "state": "enabled", 00:17:28.111 "thread": "nvmf_tgt_poll_group_000", 00:17:28.111 "listen_address": { 00:17:28.111 "trtype": "TCP", 00:17:28.111 "adrfam": "IPv4", 00:17:28.111 "traddr": "10.0.0.2", 00:17:28.111 "trsvcid": "4420" 00:17:28.111 }, 00:17:28.111 "peer_address": { 00:17:28.111 "trtype": "TCP", 00:17:28.111 "adrfam": "IPv4", 00:17:28.111 "traddr": "10.0.0.1", 00:17:28.111 "trsvcid": "50224" 00:17:28.111 }, 00:17:28.111 "auth": { 00:17:28.111 "state": "completed", 00:17:28.111 "digest": "sha384", 00:17:28.111 "dhgroup": "ffdhe6144" 00:17:28.111 } 00:17:28.111 } 00:17:28.111 ]' 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.111 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.371 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:17:28.941 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.941 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.941 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.941 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.941 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.941 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.941 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.941 21:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.941 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:29.201 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.201 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.202 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:29.202 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:29.202 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.202 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.202 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.202 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.202 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.202 21:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.202 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.462 00:17:29.462 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.462 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.462 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.722 { 00:17:29.722 "cntlid": 85, 00:17:29.722 "qid": 0, 00:17:29.722 "state": "enabled", 00:17:29.722 "thread": "nvmf_tgt_poll_group_000", 00:17:29.722 "listen_address": { 00:17:29.722 "trtype": "TCP", 00:17:29.722 "adrfam": "IPv4", 00:17:29.722 "traddr": "10.0.0.2", 00:17:29.722 "trsvcid": "4420" 00:17:29.722 }, 00:17:29.722 "peer_address": { 00:17:29.722 "trtype": "TCP", 00:17:29.722 "adrfam": "IPv4", 00:17:29.722 "traddr": "10.0.0.1", 00:17:29.722 "trsvcid": "44398" 00:17:29.722 }, 00:17:29.722 "auth": { 00:17:29.722 "state": "completed", 00:17:29.722 "digest": "sha384", 00:17:29.722 "dhgroup": "ffdhe6144" 00:17:29.722 } 00:17:29.722 } 00:17:29.722 ]' 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.722 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.991 21:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.564 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.564 21:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.824 00:17:31.084 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.084 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.084 21:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.084 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.084 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.084 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.084 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.084 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.084 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.084 { 00:17:31.084 "cntlid": 87, 00:17:31.084 "qid": 0, 00:17:31.084 "state": "enabled", 00:17:31.084 "thread": "nvmf_tgt_poll_group_000", 00:17:31.084 "listen_address": { 00:17:31.084 "trtype": "TCP", 00:17:31.084 "adrfam": "IPv4", 00:17:31.084 "traddr": "10.0.0.2", 00:17:31.084 "trsvcid": "4420" 00:17:31.084 }, 00:17:31.084 "peer_address": { 00:17:31.084 "trtype": "TCP", 00:17:31.084 "adrfam": "IPv4", 00:17:31.084 "traddr": "10.0.0.1", 00:17:31.084 "trsvcid": "44424" 00:17:31.084 }, 00:17:31.084 "auth": { 00:17:31.084 "state": "completed", 00:17:31.084 "digest": "sha384", 00:17:31.084 "dhgroup": "ffdhe6144" 00:17:31.084 } 00:17:31.084 } 00:17:31.084 ]' 00:17:31.084 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.084 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.084 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.345 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.345 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.345 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.345 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.345 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.345 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:17:31.915 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.915 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.915 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.915 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.915 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.915 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.915 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.915 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.915 21:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.175 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:32.175 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.175 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.175 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:32.175 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:32.175 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.175 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.175 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.175 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.175 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.175 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.175 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.745 00:17:32.745 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.745 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.745 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.745 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.745 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.745 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.745 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.745 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.745 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.745 { 00:17:32.745 "cntlid": 89, 00:17:32.745 "qid": 0, 00:17:32.745 "state": "enabled", 00:17:32.745 "thread": "nvmf_tgt_poll_group_000", 00:17:32.745 "listen_address": { 00:17:32.745 "trtype": "TCP", 00:17:32.745 "adrfam": "IPv4", 00:17:32.745 "traddr": "10.0.0.2", 00:17:32.745 "trsvcid": "4420" 00:17:32.745 }, 00:17:32.745 "peer_address": { 00:17:32.745 "trtype": "TCP", 00:17:32.745 "adrfam": "IPv4", 00:17:32.745 "traddr": "10.0.0.1", 00:17:32.745 "trsvcid": "44444" 00:17:32.745 }, 00:17:32.745 "auth": { 00:17:32.745 "state": "completed", 00:17:32.745 "digest": "sha384", 00:17:32.745 "dhgroup": "ffdhe8192" 00:17:32.745 } 00:17:32.745 } 00:17:32.745 ]' 00:17:32.745 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.005 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.005 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.005 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.005 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.005 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.005 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.005 21:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.265 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:17:33.834 21:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.834 21:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.404 00:17:34.404 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.404 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.404 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.664 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.664 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.664 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.664 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.664 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.664 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.664 { 00:17:34.664 "cntlid": 91, 00:17:34.664 "qid": 0, 00:17:34.664 "state": "enabled", 00:17:34.664 "thread": "nvmf_tgt_poll_group_000", 00:17:34.664 "listen_address": { 00:17:34.664 "trtype": "TCP", 00:17:34.664 "adrfam": "IPv4", 00:17:34.664 "traddr": "10.0.0.2", 00:17:34.664 "trsvcid": "4420" 00:17:34.664 }, 00:17:34.664 "peer_address": { 00:17:34.664 "trtype": "TCP", 00:17:34.664 "adrfam": "IPv4", 00:17:34.664 "traddr": "10.0.0.1", 00:17:34.665 "trsvcid": "44474" 00:17:34.665 }, 00:17:34.665 "auth": { 00:17:34.665 "state": "completed", 00:17:34.665 "digest": "sha384", 00:17:34.665 "dhgroup": "ffdhe8192" 00:17:34.665 } 00:17:34.665 } 00:17:34.665 ]' 00:17:34.665 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.665 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.665 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.665 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.665 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.665 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.665 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.665 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.925 21:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.493 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.494 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.494 21:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.064 00:17:36.064 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.064 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.064 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.324 { 00:17:36.324 "cntlid": 93, 00:17:36.324 "qid": 0, 00:17:36.324 "state": "enabled", 00:17:36.324 "thread": "nvmf_tgt_poll_group_000", 00:17:36.324 "listen_address": { 00:17:36.324 "trtype": "TCP", 00:17:36.324 "adrfam": "IPv4", 00:17:36.324 "traddr": "10.0.0.2", 00:17:36.324 "trsvcid": "4420" 00:17:36.324 }, 00:17:36.324 "peer_address": { 00:17:36.324 "trtype": "TCP", 00:17:36.324 "adrfam": "IPv4", 00:17:36.324 "traddr": "10.0.0.1", 00:17:36.324 "trsvcid": "44504" 00:17:36.324 }, 00:17:36.324 "auth": { 00:17:36.324 "state": "completed", 00:17:36.324 "digest": "sha384", 00:17:36.324 "dhgroup": "ffdhe8192" 00:17:36.324 } 00:17:36.324 } 00:17:36.324 ]' 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.324 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.584 21:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:17:37.153 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.153 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.153 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.153 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.153 21:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.153 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.153 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.154 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.413 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:37.413 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.413 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.413 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:37.413 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:37.413 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.414 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:37.414 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.414 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.414 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.414 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.414 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.674 00:17:37.674 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.674 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.674 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.934 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.934 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.934 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.934 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:37.934 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.934 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.934 { 00:17:37.934 "cntlid": 95, 00:17:37.934 "qid": 0, 00:17:37.934 "state": "enabled", 00:17:37.934 "thread": "nvmf_tgt_poll_group_000", 00:17:37.934 "listen_address": { 00:17:37.934 "trtype": "TCP", 00:17:37.934 "adrfam": "IPv4", 00:17:37.934 "traddr": "10.0.0.2", 00:17:37.934 "trsvcid": "4420" 00:17:37.934 }, 00:17:37.934 "peer_address": { 00:17:37.934 "trtype": "TCP", 00:17:37.934 "adrfam": "IPv4", 00:17:37.934 "traddr": "10.0.0.1", 00:17:37.934 "trsvcid": "44518" 00:17:37.934 }, 00:17:37.934 "auth": { 00:17:37.934 "state": "completed", 00:17:37.934 "digest": "sha384", 00:17:37.934 "dhgroup": "ffdhe8192" 00:17:37.934 } 00:17:37.934 } 00:17:37.934 ]' 00:17:37.934 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.934 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.934 21:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.934 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.934 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.194 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.194 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.194 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.194 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:17:38.763 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.763 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.763 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.763 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.763 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.763 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:38.763 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.763 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.763 21:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:38.763 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:39.022 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:39.022 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.022 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.022 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:39.022 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:39.022 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.022 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.022 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.022 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.022 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.022 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.022 21:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.283 00:17:39.283 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.283 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.283 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.283 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.283 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.283 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.283 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.283 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.283 21:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.283 { 00:17:39.283 "cntlid": 97, 00:17:39.283 "qid": 0, 00:17:39.283 "state": "enabled", 00:17:39.283 "thread": "nvmf_tgt_poll_group_000", 00:17:39.283 "listen_address": { 00:17:39.283 "trtype": "TCP", 00:17:39.283 "adrfam": "IPv4", 00:17:39.283 "traddr": "10.0.0.2", 00:17:39.283 "trsvcid": "4420" 00:17:39.283 }, 00:17:39.283 "peer_address": { 00:17:39.283 "trtype": "TCP", 00:17:39.283 "adrfam": "IPv4", 00:17:39.283 "traddr": "10.0.0.1", 00:17:39.283 "trsvcid": "43148" 00:17:39.283 }, 00:17:39.283 "auth": { 00:17:39.283 "state": "completed", 00:17:39.283 "digest": "sha512", 00:17:39.283 "dhgroup": "null" 00:17:39.283 } 00:17:39.283 } 00:17:39.283 ]' 00:17:39.283 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.545 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.545 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.545 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:39.545 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.545 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.545 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.545 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.854 21:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:17:40.114 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.114 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.114 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.374 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.634 00:17:40.634 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.634 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.634 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.894 { 00:17:40.894 "cntlid": 99, 00:17:40.894 "qid": 0, 00:17:40.894 "state": "enabled", 00:17:40.894 "thread": "nvmf_tgt_poll_group_000", 00:17:40.894 "listen_address": { 00:17:40.894 "trtype": "TCP", 00:17:40.894 "adrfam": "IPv4", 00:17:40.894 
"traddr": "10.0.0.2", 00:17:40.894 "trsvcid": "4420" 00:17:40.894 }, 00:17:40.894 "peer_address": { 00:17:40.894 "trtype": "TCP", 00:17:40.894 "adrfam": "IPv4", 00:17:40.894 "traddr": "10.0.0.1", 00:17:40.894 "trsvcid": "43174" 00:17:40.894 }, 00:17:40.894 "auth": { 00:17:40.894 "state": "completed", 00:17:40.894 "digest": "sha512", 00:17:40.894 "dhgroup": "null" 00:17:40.894 } 00:17:40.894 } 00:17:40.894 ]' 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.894 21:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.155 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:17:41.725 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.725 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.725 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.725 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.725 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.725 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.725 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.725 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.985 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:41.985 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.985 21:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.985 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:41.985 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.985 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.986 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.986 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.986 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.986 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.986 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.986 21:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.986 00:17:41.986 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.986 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.986 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.245 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.246 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.246 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.246 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.246 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.246 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.246 { 00:17:42.246 "cntlid": 101, 00:17:42.246 "qid": 0, 00:17:42.246 "state": "enabled", 00:17:42.246 "thread": "nvmf_tgt_poll_group_000", 00:17:42.246 "listen_address": { 00:17:42.246 "trtype": "TCP", 00:17:42.246 "adrfam": "IPv4", 00:17:42.246 "traddr": "10.0.0.2", 00:17:42.246 "trsvcid": "4420" 00:17:42.246 }, 00:17:42.246 "peer_address": { 00:17:42.246 "trtype": "TCP", 00:17:42.246 "adrfam": "IPv4", 00:17:42.246 "traddr": "10.0.0.1", 00:17:42.246 "trsvcid": "43192" 00:17:42.246 }, 00:17:42.246 "auth": { 00:17:42.246 "state": "completed", 00:17:42.246 "digest": "sha512", 00:17:42.246 "dhgroup": "null" 
00:17:42.246 } 00:17:42.246 } 00:17:42.246 ]' 00:17:42.246 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.246 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.246 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.505 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:42.505 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.505 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.505 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.505 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.505 21:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:17:43.075 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.075 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.075 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.075 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.075 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.075 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.075 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.075 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.336 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:43.336 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.336 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.336 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:43.336 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:43.336 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.336 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:43.336 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.336 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.336 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.336 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.336 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.596 00:17:43.596 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.596 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.596 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.856 { 00:17:43.856 "cntlid": 103, 00:17:43.856 "qid": 0, 00:17:43.856 "state": "enabled", 00:17:43.856 "thread": "nvmf_tgt_poll_group_000", 00:17:43.856 "listen_address": { 00:17:43.856 "trtype": "TCP", 00:17:43.856 "adrfam": "IPv4", 00:17:43.856 "traddr": "10.0.0.2", 00:17:43.856 "trsvcid": "4420" 00:17:43.856 }, 00:17:43.856 "peer_address": { 00:17:43.856 "trtype": "TCP", 00:17:43.856 "adrfam": "IPv4", 00:17:43.856 "traddr": "10.0.0.1", 00:17:43.856 "trsvcid": "43204" 00:17:43.856 }, 00:17:43.856 "auth": { 00:17:43.856 "state": "completed", 00:17:43.856 "digest": "sha512", 00:17:43.856 "dhgroup": "null" 00:17:43.856 } 00:17:43.856 } 00:17:43.856 ]' 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.856 21:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.856 21:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.116 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.685 21:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.685 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.945 00:17:44.945 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.945 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.945 21:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.205 { 00:17:45.205 "cntlid": 105, 00:17:45.205 "qid": 0, 00:17:45.205 "state": "enabled", 00:17:45.205 "thread": "nvmf_tgt_poll_group_000", 00:17:45.205 "listen_address": { 00:17:45.205 "trtype": "TCP", 00:17:45.205 "adrfam": "IPv4", 00:17:45.205 "traddr": "10.0.0.2", 00:17:45.205 "trsvcid": "4420" 00:17:45.205 }, 00:17:45.205 "peer_address": { 00:17:45.205 "trtype": "TCP", 00:17:45.205 "adrfam": "IPv4", 00:17:45.205 "traddr": "10.0.0.1", 00:17:45.205 "trsvcid": "43230" 00:17:45.205 }, 00:17:45.205 "auth": { 00:17:45.205 "state": "completed", 00:17:45.205 "digest": "sha512", 00:17:45.205 "dhgroup": "ffdhe2048" 00:17:45.205 } 00:17:45.205 } 00:17:45.205 ]' 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.205 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.463 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:17:46.032 21:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.032 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.032 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.032 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.032 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.032 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.032 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.032 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.292 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:46.292 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.292 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.292 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:46.292 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:46.292 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.292 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.292 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.292 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.292 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:17:46.292 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.292 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.552 00:17:46.553 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.553 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.553 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.553 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.553 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.553 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.553 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.553 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.553 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.553 { 00:17:46.553 "cntlid": 107, 00:17:46.553 "qid": 0, 00:17:46.553 "state": "enabled", 00:17:46.553 "thread": "nvmf_tgt_poll_group_000", 00:17:46.553 "listen_address": { 00:17:46.553 "trtype": "TCP", 00:17:46.553 "adrfam": "IPv4", 00:17:46.553 "traddr": "10.0.0.2", 00:17:46.553 "trsvcid": "4420" 00:17:46.553 }, 00:17:46.553 "peer_address": { 00:17:46.553 "trtype": "TCP", 00:17:46.553 "adrfam": "IPv4", 00:17:46.553 "traddr": "10.0.0.1", 00:17:46.553 "trsvcid": "43262" 00:17:46.553 }, 00:17:46.553 "auth": { 00:17:46.553 "state": "completed", 00:17:46.553 "digest": "sha512", 00:17:46.553 "dhgroup": "ffdhe2048" 00:17:46.553 } 00:17:46.553 } 00:17:46.553 ]' 00:17:46.553 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.553 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.553 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.813 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.813 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.813 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.813 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.813 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.813 21:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:17:47.383 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.383 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.383 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.383 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.383 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.383 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.383 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.383 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.644 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:47.644 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.644 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.644 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:47.644 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:47.644 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.644 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.644 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.644 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.644 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.644 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
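(For reference, the sequence each iteration above exercises — here sha512 with the ffdhe2048 DH group and key2 — can be reproduced by hand with the same RPCs that appear in this trace. A minimal sketch, with paths shortened to scripts/rpc.py inside the spdk checkout, the target assumed on its default RPC socket and the host app on /var/tmp/host.sock as shown above, key names key2/ckey2 assumed already registered earlier in the script, and the DHHC-1 secrets replaced by placeholders rather than the real base64 strings visible in the log:)

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0
# Restrict the host-side bdev_nvme module to the digest/dhgroup under test.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# Allow the host on the target subsystem with the matching key pair (target RPC socket assumed default).
scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Attach from the SPDK host app; DH-HMAC-CHAP must complete for the controller to come up.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Confirm the qpair reports the negotiated digest/dhgroup and a "completed" auth state, then detach.
scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# Kernel-initiator path from the same iteration: nvme-cli takes the raw DHHC-1 secrets directly
# (placeholders below stand in for the base64 secrets printed in the trace).
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret "DHHC-1:02:<key2 secret>" --dhchap-ctrl-secret "DHHC-1:01:<ckey2 secret>"
nvme disconnect -n "$SUBNQN"
# Remove the host entry before the next digest/dhgroup/key combination.
scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
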
00:17:47.644 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.903 00:17:47.903 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.903 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.903 21:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.163 { 00:17:48.163 "cntlid": 109, 00:17:48.163 "qid": 0, 00:17:48.163 "state": "enabled", 00:17:48.163 "thread": "nvmf_tgt_poll_group_000", 00:17:48.163 "listen_address": { 00:17:48.163 "trtype": "TCP", 00:17:48.163 "adrfam": "IPv4", 00:17:48.163 "traddr": "10.0.0.2", 00:17:48.163 "trsvcid": "4420" 00:17:48.163 }, 00:17:48.163 "peer_address": { 00:17:48.163 "trtype": "TCP", 00:17:48.163 "adrfam": "IPv4", 00:17:48.163 "traddr": "10.0.0.1", 00:17:48.163 "trsvcid": "43270" 00:17:48.163 }, 00:17:48.163 "auth": { 00:17:48.163 "state": "completed", 00:17:48.163 "digest": "sha512", 00:17:48.163 "dhgroup": "ffdhe2048" 00:17:48.163 } 00:17:48.163 } 00:17:48.163 ]' 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.163 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.423 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:17:48.992 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.992 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.992 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.992 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.992 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.992 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.992 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.992 21:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.992 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:48.992 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.992 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.992 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:48.992 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.992 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.992 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:48.992 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.993 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.993 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.993 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.993 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.252 00:17:49.252 21:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.252 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.252 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.512 { 00:17:49.512 "cntlid": 111, 00:17:49.512 "qid": 0, 00:17:49.512 "state": "enabled", 00:17:49.512 "thread": "nvmf_tgt_poll_group_000", 00:17:49.512 "listen_address": { 00:17:49.512 "trtype": "TCP", 00:17:49.512 "adrfam": "IPv4", 00:17:49.512 "traddr": "10.0.0.2", 00:17:49.512 "trsvcid": "4420" 00:17:49.512 }, 00:17:49.512 "peer_address": { 00:17:49.512 "trtype": "TCP", 00:17:49.512 "adrfam": "IPv4", 00:17:49.512 "traddr": "10.0.0.1", 00:17:49.512 "trsvcid": "48612" 00:17:49.512 }, 00:17:49.512 "auth": { 00:17:49.512 "state": "completed", 00:17:49.512 "digest": "sha512", 00:17:49.512 "dhgroup": "ffdhe2048" 00:17:49.512 } 00:17:49.512 } 00:17:49.512 ]' 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.512 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.772 21:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:17:50.341 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.341 21:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.341 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.341 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.341 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.341 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.341 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.341 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.341 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.601 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:50.601 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.601 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.601 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:50.601 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.601 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.601 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.601 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.601 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.601 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.601 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.601 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.861 00:17:50.861 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.861 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.861 21:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.861 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.861 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.861 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.861 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.861 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.861 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.861 { 00:17:50.861 "cntlid": 113, 00:17:50.861 "qid": 0, 00:17:50.861 "state": "enabled", 00:17:50.861 "thread": "nvmf_tgt_poll_group_000", 00:17:50.861 "listen_address": { 00:17:50.861 "trtype": "TCP", 00:17:50.861 "adrfam": "IPv4", 00:17:50.861 "traddr": "10.0.0.2", 00:17:50.861 "trsvcid": "4420" 00:17:50.861 }, 00:17:50.861 "peer_address": { 00:17:50.861 "trtype": "TCP", 00:17:50.861 "adrfam": "IPv4", 00:17:50.861 "traddr": "10.0.0.1", 00:17:50.861 "trsvcid": "48642" 00:17:50.861 }, 00:17:50.861 "auth": { 00:17:50.861 "state": "completed", 00:17:50.861 "digest": "sha512", 00:17:50.861 "dhgroup": "ffdhe3072" 00:17:50.861 } 00:17:50.861 } 00:17:50.861 ]' 00:17:50.861 21:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.120 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.121 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.121 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.121 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.121 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.121 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.121 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.381 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.951 21:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.211 00:17:52.211 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.211 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.211 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.471 { 00:17:52.471 "cntlid": 115, 00:17:52.471 "qid": 0, 00:17:52.471 "state": "enabled", 00:17:52.471 "thread": "nvmf_tgt_poll_group_000", 00:17:52.471 "listen_address": { 00:17:52.471 "trtype": "TCP", 00:17:52.471 "adrfam": "IPv4", 00:17:52.471 "traddr": "10.0.0.2", 00:17:52.471 "trsvcid": "4420" 00:17:52.471 }, 00:17:52.471 "peer_address": { 00:17:52.471 "trtype": "TCP", 00:17:52.471 "adrfam": "IPv4", 00:17:52.471 "traddr": "10.0.0.1", 00:17:52.471 "trsvcid": "48668" 00:17:52.471 }, 00:17:52.471 "auth": { 00:17:52.471 "state": "completed", 00:17:52.471 "digest": "sha512", 00:17:52.471 "dhgroup": "ffdhe3072" 00:17:52.471 } 00:17:52.471 } 00:17:52.471 ]' 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.471 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.731 21:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:17:53.301 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.301 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.301 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.301 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.301 21:43:01 
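[editor's note] After the SPDK-host controller is detached, the trace repeats the handshake from the kernel initiator with nvme-cli, passing the raw DHHC-1 secrets on the command line instead of keyring names. A condensed sketch, with the secret values elided here since the full strings appear verbatim in the log above:

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

# Connect with one I/O queue, authenticating with the host secret and, for the
# bidirectional passes, the controller secret as well
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'

# Tear the kernel controller down again before the next iteration
nvme disconnect -n "$SUBNQN"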
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.301 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.301 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.301 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.301 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:53.301 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.301 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.301 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.301 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:53.561 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.561 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.561 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.561 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.561 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.561 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.561 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.561 00:17:53.821 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.821 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.821 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.821 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.821 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.821 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.821 21:43:01 
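[editor's note] The "for dhgroup" / "for keyid" markers (target/auth.sh@92 to @94) show how the script drives these repetitions: an outer loop over the FFDHE groups and an inner loop over the key indices, re-issuing bdev_nvme_set_options before each connect. The skeleton below is reconstructed from those trace markers; the array contents are illustrative (only the groups visible in this excerpt are listed) and the digest is fixed at sha512 because that is all this excerpt exercises.

dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # order inferred from the log
keys=(key0 key1 key2 key3)
hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Pin the initiator to the digest/dhgroup under test before each pass
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # ...followed by one add_host / attach / verify / teardown pass
        # (connect_authenticate in the script; see the sketches above and below)
    done
done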
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.821 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.821 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.821 { 00:17:53.821 "cntlid": 117, 00:17:53.821 "qid": 0, 00:17:53.821 "state": "enabled", 00:17:53.821 "thread": "nvmf_tgt_poll_group_000", 00:17:53.821 "listen_address": { 00:17:53.821 "trtype": "TCP", 00:17:53.821 "adrfam": "IPv4", 00:17:53.821 "traddr": "10.0.0.2", 00:17:53.821 "trsvcid": "4420" 00:17:53.821 }, 00:17:53.821 "peer_address": { 00:17:53.821 "trtype": "TCP", 00:17:53.821 "adrfam": "IPv4", 00:17:53.821 "traddr": "10.0.0.1", 00:17:53.821 "trsvcid": "48698" 00:17:53.821 }, 00:17:53.821 "auth": { 00:17:53.821 "state": "completed", 00:17:53.821 "digest": "sha512", 00:17:53.821 "dhgroup": "ffdhe3072" 00:17:53.821 } 00:17:53.821 } 00:17:53.821 ]' 00:17:53.821 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.821 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.821 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.080 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.080 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.080 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.080 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.080 21:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.080 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:17:54.646 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.646 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.646 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.646 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.646 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.646 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.647 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:17:54.647 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.906 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:54.906 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.906 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.906 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:54.906 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.906 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.906 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:54.906 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.906 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.906 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.906 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.906 21:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.165 00:17:55.165 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.165 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.165 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.424 { 00:17:55.424 "cntlid": 119, 00:17:55.424 "qid": 0, 00:17:55.424 "state": "enabled", 00:17:55.424 "thread": 
"nvmf_tgt_poll_group_000", 00:17:55.424 "listen_address": { 00:17:55.424 "trtype": "TCP", 00:17:55.424 "adrfam": "IPv4", 00:17:55.424 "traddr": "10.0.0.2", 00:17:55.424 "trsvcid": "4420" 00:17:55.424 }, 00:17:55.424 "peer_address": { 00:17:55.424 "trtype": "TCP", 00:17:55.424 "adrfam": "IPv4", 00:17:55.424 "traddr": "10.0.0.1", 00:17:55.424 "trsvcid": "48732" 00:17:55.424 }, 00:17:55.424 "auth": { 00:17:55.424 "state": "completed", 00:17:55.424 "digest": "sha512", 00:17:55.424 "dhgroup": "ffdhe3072" 00:17:55.424 } 00:17:55.424 } 00:17:55.424 ]' 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.424 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.683 21:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.309 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.594 00:17:56.594 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.594 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.594 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.853 { 00:17:56.853 "cntlid": 121, 00:17:56.853 "qid": 0, 00:17:56.853 "state": "enabled", 00:17:56.853 "thread": "nvmf_tgt_poll_group_000", 00:17:56.853 "listen_address": { 00:17:56.853 "trtype": "TCP", 00:17:56.853 "adrfam": "IPv4", 00:17:56.853 "traddr": "10.0.0.2", 00:17:56.853 "trsvcid": "4420" 00:17:56.853 }, 00:17:56.853 "peer_address": { 00:17:56.853 "trtype": "TCP", 00:17:56.853 "adrfam": 
"IPv4", 00:17:56.853 "traddr": "10.0.0.1", 00:17:56.853 "trsvcid": "48752" 00:17:56.853 }, 00:17:56.853 "auth": { 00:17:56.853 "state": "completed", 00:17:56.853 "digest": "sha512", 00:17:56.853 "dhgroup": "ffdhe4096" 00:17:56.853 } 00:17:56.853 } 00:17:56.853 ]' 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.853 21:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.112 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:17:57.680 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.681 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.681 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.681 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.681 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.681 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.681 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.681 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.940 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:57.940 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.940 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.940 
21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:57.940 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.940 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.940 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.940 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.940 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.940 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.940 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.940 21:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.199 00:17:58.199 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.199 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.199 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.199 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.199 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.199 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.199 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.199 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.199 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.199 { 00:17:58.199 "cntlid": 123, 00:17:58.199 "qid": 0, 00:17:58.199 "state": "enabled", 00:17:58.199 "thread": "nvmf_tgt_poll_group_000", 00:17:58.199 "listen_address": { 00:17:58.199 "trtype": "TCP", 00:17:58.199 "adrfam": "IPv4", 00:17:58.199 "traddr": "10.0.0.2", 00:17:58.199 "trsvcid": "4420" 00:17:58.199 }, 00:17:58.199 "peer_address": { 00:17:58.199 "trtype": "TCP", 00:17:58.199 "adrfam": "IPv4", 00:17:58.199 "traddr": "10.0.0.1", 00:17:58.199 "trsvcid": "48788" 00:17:58.199 }, 00:17:58.199 "auth": { 00:17:58.199 "state": "completed", 00:17:58.199 "digest": "sha512", 00:17:58.199 "dhgroup": "ffdhe4096" 00:17:58.199 } 00:17:58.199 } 00:17:58.199 ]' 00:17:58.459 21:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.459 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.459 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.459 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.459 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.459 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.459 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.459 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.718 21:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.287 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.546 00:17:59.546 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.546 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.546 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.805 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.805 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.805 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.805 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.805 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.805 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.805 { 00:17:59.805 "cntlid": 125, 00:17:59.805 "qid": 0, 00:17:59.805 "state": "enabled", 00:17:59.805 "thread": "nvmf_tgt_poll_group_000", 00:17:59.805 "listen_address": { 00:17:59.805 "trtype": "TCP", 00:17:59.805 "adrfam": "IPv4", 00:17:59.805 "traddr": "10.0.0.2", 00:17:59.805 "trsvcid": "4420" 00:17:59.805 }, 00:17:59.805 "peer_address": { 00:17:59.805 "trtype": "TCP", 00:17:59.805 "adrfam": "IPv4", 00:17:59.805 "traddr": "10.0.0.1", 00:17:59.805 "trsvcid": "37370" 00:17:59.805 }, 00:17:59.805 "auth": { 00:17:59.805 "state": "completed", 00:17:59.805 "digest": "sha512", 00:17:59.805 "dhgroup": "ffdhe4096" 00:17:59.805 } 00:17:59.805 } 00:17:59.805 ]' 00:17:59.805 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.805 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.805 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.805 
21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.805 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.065 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.065 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.065 21:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.065 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:18:00.635 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.635 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.635 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.635 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.635 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.635 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.635 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.635 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.895 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:00.895 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.895 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.895 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:00.895 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:00.895 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.895 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:00.895 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:00.895 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.895 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.895 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.895 21:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.154 00:18:01.154 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.154 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.154 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.154 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.413 { 00:18:01.413 "cntlid": 127, 00:18:01.413 "qid": 0, 00:18:01.413 "state": "enabled", 00:18:01.413 "thread": "nvmf_tgt_poll_group_000", 00:18:01.413 "listen_address": { 00:18:01.413 "trtype": "TCP", 00:18:01.413 "adrfam": "IPv4", 00:18:01.413 "traddr": "10.0.0.2", 00:18:01.413 "trsvcid": "4420" 00:18:01.413 }, 00:18:01.413 "peer_address": { 00:18:01.413 "trtype": "TCP", 00:18:01.413 "adrfam": "IPv4", 00:18:01.413 "traddr": "10.0.0.1", 00:18:01.413 "trsvcid": "37404" 00:18:01.413 }, 00:18:01.413 "auth": { 00:18:01.413 "state": "completed", 00:18:01.413 "digest": "sha512", 00:18:01.413 "dhgroup": "ffdhe4096" 00:18:01.413 } 00:18:01.413 } 00:18:01.413 ]' 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.413 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.673 21:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.243 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.811 00:18:02.811 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.811 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.811 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.811 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.811 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.811 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.811 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.811 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.811 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.811 { 00:18:02.811 "cntlid": 129, 00:18:02.812 "qid": 0, 00:18:02.812 "state": "enabled", 00:18:02.812 "thread": "nvmf_tgt_poll_group_000", 00:18:02.812 "listen_address": { 00:18:02.812 "trtype": "TCP", 00:18:02.812 "adrfam": "IPv4", 00:18:02.812 "traddr": "10.0.0.2", 00:18:02.812 "trsvcid": "4420" 00:18:02.812 }, 00:18:02.812 "peer_address": { 00:18:02.812 "trtype": "TCP", 00:18:02.812 "adrfam": "IPv4", 00:18:02.812 "traddr": "10.0.0.1", 00:18:02.812 "trsvcid": "37444" 00:18:02.812 }, 00:18:02.812 "auth": { 00:18:02.812 "state": "completed", 00:18:02.812 "digest": "sha512", 00:18:02.812 "dhgroup": "ffdhe6144" 00:18:02.812 } 00:18:02.812 } 00:18:02.812 ]' 00:18:02.812 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.812 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.812 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.072 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.072 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.072 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.072 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.072 21:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.072 
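[editor's note] Each iteration ends the way it is ending here: the host-side controller is detached and, after the nvme-cli round trip, the host entry is removed from the subsystem so the next key/dhgroup combination starts from a clean slate. In isolation, with the paths and NQNs taken from the trace and the target's default RPC socket assumed:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host app: drop the controller created for this iteration
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Target app: revoke the host's access (and its DH-HMAC-CHAP keys) again
$RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"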
21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:18:03.642 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.642 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.642 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.642 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.642 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.642 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.642 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.642 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.901 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:03.901 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.901 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.901 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:03.901 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.901 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.901 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.901 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.901 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.902 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.902 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.902 21:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.160 00:18:04.160 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.160 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.160 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.419 { 00:18:04.419 "cntlid": 131, 00:18:04.419 "qid": 0, 00:18:04.419 "state": "enabled", 00:18:04.419 "thread": "nvmf_tgt_poll_group_000", 00:18:04.419 "listen_address": { 00:18:04.419 "trtype": "TCP", 00:18:04.419 "adrfam": "IPv4", 00:18:04.419 "traddr": "10.0.0.2", 00:18:04.419 "trsvcid": "4420" 00:18:04.419 }, 00:18:04.419 "peer_address": { 00:18:04.419 "trtype": "TCP", 00:18:04.419 "adrfam": "IPv4", 00:18:04.419 "traddr": "10.0.0.1", 00:18:04.419 "trsvcid": "37464" 00:18:04.419 }, 00:18:04.419 "auth": { 00:18:04.419 "state": "completed", 00:18:04.419 "digest": "sha512", 00:18:04.419 "dhgroup": "ffdhe6144" 00:18:04.419 } 00:18:04.419 } 00:18:04.419 ]' 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.419 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.420 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.679 21:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:18:05.250 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.250 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:05.250 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.250 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.250 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.250 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.250 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.250 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.509 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:05.509 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.509 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.509 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.509 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:05.509 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.509 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.509 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.509 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.509 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.509 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.509 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.768 
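The exchange above is one pass of the connect_authenticate helper in target/auth.sh, here for sha512/ffdhe6144 with key2: the host-side bdev_nvme module is restricted to a single digest/dhgroup pair, the host NQN is registered on the subsystem with a DH-HMAC-CHAP key pair, and a controller is attached with the matching keys. A condensed sketch of that round trip, with rpc.py standing in for the full scripts/rpc.py path the log invokes and the addresses/NQNs taken from this run (an illustration of the flow, not the literal script):

    # host application (hostrpc, socket /var/tmp/host.sock): allow only this digest/dhgroup
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # target application (rpc_cmd): register the host NQN with key2/ckey2
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # host application: attach; the controller only appears if both sides negotiate the keys
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2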
00:18:05.768 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.768 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.768 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.028 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.028 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.028 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.028 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.028 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.028 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.028 { 00:18:06.028 "cntlid": 133, 00:18:06.028 "qid": 0, 00:18:06.028 "state": "enabled", 00:18:06.028 "thread": "nvmf_tgt_poll_group_000", 00:18:06.028 "listen_address": { 00:18:06.028 "trtype": "TCP", 00:18:06.028 "adrfam": "IPv4", 00:18:06.028 "traddr": "10.0.0.2", 00:18:06.028 "trsvcid": "4420" 00:18:06.028 }, 00:18:06.028 "peer_address": { 00:18:06.028 "trtype": "TCP", 00:18:06.028 "adrfam": "IPv4", 00:18:06.028 "traddr": "10.0.0.1", 00:18:06.028 "trsvcid": "37492" 00:18:06.028 }, 00:18:06.028 "auth": { 00:18:06.028 "state": "completed", 00:18:06.028 "digest": "sha512", 00:18:06.028 "dhgroup": "ffdhe6144" 00:18:06.028 } 00:18:06.028 } 00:18:06.028 ]' 00:18:06.028 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.028 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.028 21:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.028 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.028 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.028 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.028 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.028 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.288 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:18:06.857 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.857 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:06.857 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.857 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.857 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.857 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.857 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.857 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.857 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.117 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:07.117 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.117 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.117 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:07.117 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:07.117 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.117 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:07.117 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.117 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.117 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.117 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.117 21:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.377 00:18:07.377 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.377 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.377 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.636 { 00:18:07.636 "cntlid": 135, 00:18:07.636 "qid": 0, 00:18:07.636 "state": "enabled", 00:18:07.636 "thread": "nvmf_tgt_poll_group_000", 00:18:07.636 "listen_address": { 00:18:07.636 "trtype": "TCP", 00:18:07.636 "adrfam": "IPv4", 00:18:07.636 "traddr": "10.0.0.2", 00:18:07.636 "trsvcid": "4420" 00:18:07.636 }, 00:18:07.636 "peer_address": { 00:18:07.636 "trtype": "TCP", 00:18:07.636 "adrfam": "IPv4", 00:18:07.636 "traddr": "10.0.0.1", 00:18:07.636 "trsvcid": "37508" 00:18:07.636 }, 00:18:07.636 "auth": { 00:18:07.636 "state": "completed", 00:18:07.636 "digest": "sha512", 00:18:07.636 "dhgroup": "ffdhe6144" 00:18:07.636 } 00:18:07.636 } 00:18:07.636 ]' 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.636 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.896 21:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.466 21:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.035 00:18:09.035 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.035 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.035 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
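The checks that follow each attach are identical from round to round: the host RPC socket reports its controller list, the target reports the subsystem's queue pairs, and jq extracts the negotiated auth fields. A compact sketch of that verification, using the same jq filters as the log and the values this ffdhe8192/key0 round expects (rpc.py again stands in for the full scripts/rpc.py path):

    # host side: the attached controller should be reported as nvme0
    [[ "$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    # target side: the qpair for cnode0 should show the negotiated digest, dhgroup and a completed auth state
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha512 ]]
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe8192 ]]
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]
    # afterwards the controller is detached and the same secrets are exercised through nvme-cli connect/disconnect
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0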
00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.294 { 00:18:09.294 "cntlid": 137, 00:18:09.294 "qid": 0, 00:18:09.294 "state": "enabled", 00:18:09.294 "thread": "nvmf_tgt_poll_group_000", 00:18:09.294 "listen_address": { 00:18:09.294 "trtype": "TCP", 00:18:09.294 "adrfam": "IPv4", 00:18:09.294 "traddr": "10.0.0.2", 00:18:09.294 "trsvcid": "4420" 00:18:09.294 }, 00:18:09.294 "peer_address": { 00:18:09.294 "trtype": "TCP", 00:18:09.294 "adrfam": "IPv4", 00:18:09.294 "traddr": "10.0.0.1", 00:18:09.294 "trsvcid": "57548" 00:18:09.294 }, 00:18:09.294 "auth": { 00:18:09.294 "state": "completed", 00:18:09.294 "digest": "sha512", 00:18:09.294 "dhgroup": "ffdhe8192" 00:18:09.294 } 00:18:09.294 } 00:18:09.294 ]' 00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.294 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.295 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.554 21:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.125 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.384 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.384 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.384 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.644 00:18:10.644 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.644 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.644 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.902 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.902 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.902 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.902 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.902 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.902 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.902 { 00:18:10.902 "cntlid": 139, 00:18:10.902 "qid": 0, 00:18:10.902 "state": "enabled", 00:18:10.902 "thread": "nvmf_tgt_poll_group_000", 00:18:10.902 "listen_address": { 00:18:10.902 "trtype": "TCP", 00:18:10.902 "adrfam": "IPv4", 00:18:10.902 "traddr": "10.0.0.2", 00:18:10.902 "trsvcid": "4420" 00:18:10.902 }, 00:18:10.902 "peer_address": { 00:18:10.902 "trtype": "TCP", 00:18:10.902 "adrfam": "IPv4", 00:18:10.902 "traddr": "10.0.0.1", 00:18:10.902 "trsvcid": "57562" 00:18:10.902 }, 00:18:10.902 "auth": { 00:18:10.902 "state": "completed", 00:18:10.902 "digest": "sha512", 00:18:10.902 "dhgroup": "ffdhe8192" 00:18:10.902 } 00:18:10.902 } 00:18:10.902 ]' 00:18:10.902 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.902 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.902 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.902 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.902 21:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.160 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.160 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.160 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.161 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGRiYmU2MmNlYzAyN2RkNzNjY2ZmYWU5MjBmODcxM2ZEWpGt: --dhchap-ctrl-secret DHHC-1:02:MmMxMjljNTE2YTRkMzk3ZTAxZTA0YjE4ZTE2OTJiNDQ5NzRkNjM1MTE5YWFiYWY1iNA/kQ==: 00:18:11.730 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.730 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.730 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.730 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.730 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.730 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.730 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.730 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.989 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:11.989 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.989 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:11.989 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:11.989 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.989 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.990 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.990 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.990 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.990 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.990 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.990 21:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.558 00:18:12.558 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.558 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.558 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.558 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.558 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.558 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.558 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.558 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.558 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.558 { 00:18:12.558 "cntlid": 141, 00:18:12.558 "qid": 0, 00:18:12.558 "state": "enabled", 00:18:12.558 "thread": "nvmf_tgt_poll_group_000", 00:18:12.558 "listen_address": 
{ 00:18:12.558 "trtype": "TCP", 00:18:12.558 "adrfam": "IPv4", 00:18:12.558 "traddr": "10.0.0.2", 00:18:12.558 "trsvcid": "4420" 00:18:12.558 }, 00:18:12.558 "peer_address": { 00:18:12.558 "trtype": "TCP", 00:18:12.558 "adrfam": "IPv4", 00:18:12.558 "traddr": "10.0.0.1", 00:18:12.558 "trsvcid": "57598" 00:18:12.558 }, 00:18:12.558 "auth": { 00:18:12.558 "state": "completed", 00:18:12.558 "digest": "sha512", 00:18:12.558 "dhgroup": "ffdhe8192" 00:18:12.558 } 00:18:12.558 } 00:18:12.558 ]' 00:18:12.558 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.817 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.817 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.817 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.817 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.817 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.817 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.817 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.125 21:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MGVlZDk0YjA3MDJhY2ZhYzZiMWQ4NWRiMGQ3MWJjZTc5YjBmYWNlNjVlYWJiYzEyUjuNdg==: --dhchap-ctrl-secret DHHC-1:01:Yzg4Yzg0OWY5YjlmYjNhZmM0ZWY0ZDdiMjBlMjIwYjHQiWgf: 00:18:13.407 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.407 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.407 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.407 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.407 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.407 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.407 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.407 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.666 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:13.666 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.666 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:13.666 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:13.666 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.666 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.666 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:13.666 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.666 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.666 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.666 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.666 21:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.235 00:18:14.235 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.235 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.235 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.495 { 00:18:14.495 "cntlid": 143, 00:18:14.495 "qid": 0, 00:18:14.495 "state": "enabled", 00:18:14.495 "thread": "nvmf_tgt_poll_group_000", 00:18:14.495 "listen_address": { 00:18:14.495 "trtype": "TCP", 00:18:14.495 "adrfam": "IPv4", 00:18:14.495 "traddr": "10.0.0.2", 00:18:14.495 "trsvcid": "4420" 00:18:14.495 }, 00:18:14.495 "peer_address": { 00:18:14.495 "trtype": "TCP", 00:18:14.495 "adrfam": "IPv4", 00:18:14.495 "traddr": "10.0.0.1", 00:18:14.495 "trsvcid": "57620" 00:18:14.495 }, 00:18:14.495 "auth": { 00:18:14.495 "state": "completed", 00:18:14.495 "digest": "sha512", 00:18:14.495 "dhgroup": 
"ffdhe8192" 00:18:14.495 } 00:18:14.495 } 00:18:14.495 ]' 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.495 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.755 21:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.329 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.901 00:18:15.901 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.901 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.901 21:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.161 { 00:18:16.161 "cntlid": 145, 00:18:16.161 "qid": 0, 00:18:16.161 "state": "enabled", 00:18:16.161 "thread": "nvmf_tgt_poll_group_000", 00:18:16.161 "listen_address": { 00:18:16.161 "trtype": "TCP", 00:18:16.161 "adrfam": "IPv4", 00:18:16.161 "traddr": "10.0.0.2", 00:18:16.161 "trsvcid": "4420" 00:18:16.161 }, 00:18:16.161 "peer_address": { 00:18:16.161 "trtype": "TCP", 00:18:16.161 "adrfam": "IPv4", 00:18:16.161 "traddr": "10.0.0.1", 00:18:16.161 "trsvcid": "57642" 00:18:16.161 }, 00:18:16.161 "auth": { 00:18:16.161 
"state": "completed", 00:18:16.161 "digest": "sha512", 00:18:16.161 "dhgroup": "ffdhe8192" 00:18:16.161 } 00:18:16.161 } 00:18:16.161 ]' 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.161 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.421 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MGI1OGJmODQ5NGY1NzdiNDdhYTBlYzhmYjVmMmQ0NDQyM2U3NThlZjg1Y2Q4Nzlhn5i93Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjNjJjYzgyNTExODc3ZWYwZDI0ZmYyZjFlMDU2MDJjN2E0MTg3MmI5ZmMwMmE3YjljYjYwODkyOGQzNmQ5OfIyk24=: 00:18:16.991 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.991 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.991 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.991 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.991 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.992 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:16.992 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.992 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.992 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.992 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:16.992 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:16.992 21:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:16.992 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:16.992 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:16.992 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:16.992 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:16.992 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:16.992 21:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:17.560 request: 00:18:17.560 { 00:18:17.560 "name": "nvme0", 00:18:17.560 "trtype": "tcp", 00:18:17.560 "traddr": "10.0.0.2", 00:18:17.560 "adrfam": "ipv4", 00:18:17.560 "trsvcid": "4420", 00:18:17.560 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:17.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:17.561 "prchk_reftag": false, 00:18:17.561 "prchk_guard": false, 00:18:17.561 "hdgst": false, 00:18:17.561 "ddgst": false, 00:18:17.561 "dhchap_key": "key2", 00:18:17.561 "method": "bdev_nvme_attach_controller", 00:18:17.561 "req_id": 1 00:18:17.561 } 00:18:17.561 Got JSON-RPC error response 00:18:17.561 response: 00:18:17.561 { 00:18:17.561 "code": -5, 00:18:17.561 "message": "Input/output error" 00:18:17.561 } 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.561 
21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:17.561 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:17.819 request: 00:18:17.819 { 00:18:17.819 "name": "nvme0", 00:18:17.819 "trtype": "tcp", 00:18:17.819 "traddr": "10.0.0.2", 00:18:17.819 "adrfam": "ipv4", 00:18:17.819 "trsvcid": "4420", 00:18:17.819 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:17.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:17.819 "prchk_reftag": false, 00:18:17.819 "prchk_guard": false, 00:18:17.819 "hdgst": false, 00:18:17.819 "ddgst": false, 00:18:17.819 "dhchap_key": "key1", 00:18:17.819 "dhchap_ctrlr_key": "ckey2", 00:18:17.819 "method": "bdev_nvme_attach_controller", 00:18:17.819 "req_id": 1 00:18:17.819 } 00:18:17.819 Got JSON-RPC error response 00:18:17.819 response: 00:18:17.819 { 00:18:17.819 "code": -5, 00:18:17.819 "message": "Input/output error" 00:18:17.819 } 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:17.819 21:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:17.819 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.820 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.820 21:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.389 request: 00:18:18.389 { 00:18:18.389 "name": "nvme0", 00:18:18.389 "trtype": "tcp", 00:18:18.389 "traddr": "10.0.0.2", 00:18:18.389 "adrfam": "ipv4", 00:18:18.389 "trsvcid": "4420", 00:18:18.389 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:18.389 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:18.389 "prchk_reftag": false, 00:18:18.389 "prchk_guard": false, 00:18:18.389 "hdgst": false, 00:18:18.389 "ddgst": false, 00:18:18.389 "dhchap_key": "key1", 00:18:18.389 "dhchap_ctrlr_key": "ckey1", 00:18:18.389 "method": "bdev_nvme_attach_controller", 00:18:18.389 "req_id": 1 00:18:18.389 } 00:18:18.389 Got JSON-RPC error response 00:18:18.389 response: 00:18:18.389 { 00:18:18.389 "code": -5, 00:18:18.389 "message": "Input/output error" 00:18:18.389 } 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3052462 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3052462 ']' 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3052462 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3052462 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3052462' 00:18:18.389 killing process with pid 3052462 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3052462 00:18:18.389 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3052462 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=3072959 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3072959 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3072959 ']' 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.649 21:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3072959 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3072959 ']' 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
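For reference, the relaunch traced just above reduces to the shell sketch below; the binary path, network namespace, and flags are copied verbatim from this log, while the polling loop merely stands in for the suite's waitforlisten helper and is an assumption, not the actual test code.

    # Hedged sketch: restart the target inside the test netns with DH-HMAC-CHAP
    # logging enabled, then wait for its RPC socket to answer.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll a harmless RPC until the app is up.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done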
00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.589 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.850 21:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.419 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.419 { 00:18:20.419 "cntlid": 1, 00:18:20.419 "qid": 0, 00:18:20.419 "state": "enabled", 00:18:20.419 "thread": "nvmf_tgt_poll_group_000", 00:18:20.419 "listen_address": { 00:18:20.419 "trtype": "TCP", 00:18:20.419 "adrfam": "IPv4", 00:18:20.419 "traddr": "10.0.0.2", 00:18:20.419 "trsvcid": "4420" 00:18:20.419 }, 00:18:20.419 "peer_address": { 00:18:20.419 "trtype": "TCP", 00:18:20.419 "adrfam": "IPv4", 00:18:20.419 "traddr": "10.0.0.1", 00:18:20.419 "trsvcid": "43528" 00:18:20.419 }, 00:18:20.419 "auth": { 00:18:20.419 "state": "completed", 00:18:20.419 "digest": "sha512", 00:18:20.419 "dhgroup": "ffdhe8192" 00:18:20.419 } 00:18:20.419 } 00:18:20.419 ]' 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.419 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.679 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.679 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.679 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.679 21:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWI1OTFiN2FkMjMxODQxOWY0ODY0ZGM5NzBhZjE3OTVjODlhMWM1YzQwNjQyMWE5ZDAxNTViNzFhMGE3OGUyYioDt44=: 00:18:21.248 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.248 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.248 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.248 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.248 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.248 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:21.248 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.248 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.248 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.248 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:21.248 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:21.507 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.507 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:21.507 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.507 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:21.507 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.507 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:21.507 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.507 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.507 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.766 request: 00:18:21.766 { 00:18:21.766 "name": "nvme0", 00:18:21.766 "trtype": "tcp", 00:18:21.766 "traddr": "10.0.0.2", 00:18:21.766 "adrfam": "ipv4", 00:18:21.766 "trsvcid": "4420", 00:18:21.766 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:21.766 "prchk_reftag": false, 00:18:21.766 "prchk_guard": false, 00:18:21.766 "hdgst": false, 00:18:21.766 "ddgst": false, 00:18:21.766 "dhchap_key": "key3", 00:18:21.766 "method": "bdev_nvme_attach_controller", 00:18:21.766 "req_id": 1 00:18:21.766 } 00:18:21.766 Got JSON-RPC error response 00:18:21.766 response: 00:18:21.766 { 00:18:21.766 "code": -5, 00:18:21.766 "message": "Input/output error" 00:18:21.766 } 00:18:21.766 21:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:21.766 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:21.766 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:21.766 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:21.766 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:21.766 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:21.766 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:21.767 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:21.767 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.767 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:21.767 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.767 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:21.767 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.767 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:21.767 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.767 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.767 21:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.026 request: 00:18:22.026 { 00:18:22.026 "name": "nvme0", 00:18:22.026 "trtype": "tcp", 00:18:22.026 "traddr": "10.0.0.2", 00:18:22.026 "adrfam": "ipv4", 00:18:22.026 "trsvcid": "4420", 00:18:22.026 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:22.026 "prchk_reftag": false, 00:18:22.026 "prchk_guard": false, 00:18:22.026 "hdgst": false, 00:18:22.026 "ddgst": false, 00:18:22.026 "dhchap_key": "key3", 00:18:22.026 
"method": "bdev_nvme_attach_controller", 00:18:22.026 "req_id": 1 00:18:22.026 } 00:18:22.026 Got JSON-RPC error response 00:18:22.026 response: 00:18:22.026 { 00:18:22.026 "code": -5, 00:18:22.026 "message": "Input/output error" 00:18:22.026 } 00:18:22.026 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:22.026 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:22.026 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:22.026 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:22.026 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:22.026 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:22.026 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:22.026 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:22.026 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:22.026 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:22.286 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:22.546 request: 00:18:22.546 { 00:18:22.546 "name": "nvme0", 00:18:22.546 "trtype": "tcp", 00:18:22.546 "traddr": "10.0.0.2", 00:18:22.546 "adrfam": "ipv4", 00:18:22.546 "trsvcid": "4420", 00:18:22.546 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:22.546 "prchk_reftag": false, 00:18:22.546 "prchk_guard": false, 00:18:22.546 "hdgst": false, 00:18:22.546 "ddgst": false, 00:18:22.546 "dhchap_key": "key0", 00:18:22.546 "dhchap_ctrlr_key": "key1", 00:18:22.546 "method": "bdev_nvme_attach_controller", 00:18:22.546 "req_id": 1 00:18:22.546 } 00:18:22.546 Got JSON-RPC error response 00:18:22.546 response: 00:18:22.546 { 00:18:22.546 "code": -5, 00:18:22.546 "message": "Input/output error" 00:18:22.546 } 00:18:22.546 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:22.546 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:22.546 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:22.546 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:22.546 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:22.546 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:22.546 00:18:22.546 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:22.546 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
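The attach/detach exchanges above condense to the host-side rpc.py pattern sketched here; the socket path, address, NQNs, and flags are copied from the trace, and key0/key3 are assumed to be the DH-HMAC-CHAP keys loaded earlier in the run (an illustrative condensation, not the auth.sh script itself).

    # Hedged sketch of the host-side auth checks exercised above.
    HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    # Restrict the host's allowed digests, then attach: a disallowed negotiation
    # fails with JSON-RPC code -5 (Input/output error), which the NOT wrapper expects.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3 || echo "attach rejected as expected"
    # Re-enable the full digest/dhgroup set and verify a plain key0 attach succeeds.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0
    $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    $HOSTRPC bdev_nvme_detach_controller nvme0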
00:18:22.546 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.806 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.806 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.806 21:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3052708 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3052708 ']' 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3052708 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3052708 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3052708' 00:18:23.065 killing process with pid 3052708 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3052708 00:18:23.065 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3052708 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:23.326 rmmod nvme_tcp 00:18:23.326 rmmod nvme_fabrics 00:18:23.326 rmmod nvme_keyring 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 3072959 ']' 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3072959 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3072959 ']' 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3072959 00:18:23.326 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3072959 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3072959' 00:18:23.586 killing process with pid 3072959 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3072959 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3072959 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:23.586 21:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.jay /tmp/spdk.key-sha256.GGP /tmp/spdk.key-sha384.tBG /tmp/spdk.key-sha512.1No /tmp/spdk.key-sha512.GIn /tmp/spdk.key-sha384.FeW /tmp/spdk.key-sha256.uIk '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:26.126 00:18:26.126 real 2m8.635s 00:18:26.126 user 4m56.141s 00:18:26.126 sys 0m18.328s 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.126 ************************************ 00:18:26.126 END TEST nvmf_auth_target 00:18:26.126 ************************************ 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:26.126 21:43:33 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:26.126 ************************************ 00:18:26.126 START TEST nvmf_bdevio_no_huge 00:18:26.126 ************************************ 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:26.126 * Looking for test storage... 00:18:26.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.126 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:26.127 21:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:26.127 21:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:31.406 21:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.406 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:31.407 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.407 21:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:31.407 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:31.407 Found net devices under 0000:86:00.0: cvl_0_0 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:31.407 Found net devices under 0000:86:00.1: cvl_0_1 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:31.407 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:18:31.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:18:31.407 00:18:31.407 --- 10.0.0.2 ping statistics --- 00:18:31.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.407 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:31.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:18:31.407 00:18:31.407 --- 10.0.0.1 ping statistics --- 00:18:31.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.407 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3077219 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3077219 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3077219 ']' 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
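Everything from nvmf_tcp_init onward above builds the two-sided TCP test bed: the target-side E810 port is moved into a private network namespace, each side gets an address on 10.0.0.0/24, port 4420 is opened in iptables on the initiator interface, and one ping in each direction proves the link before the target is launched inside the namespace. Condensed into plain commands (interface names and addresses are the ones this run picked):

# Sketch of the test-bed wiring performed above; one E810 port plays target, the other initiator.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"               # target port lives in its own namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"           # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                              # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1          # target namespace -> root namespace

nvmf_tgt is then started under ip netns exec cvl_0_0_ns_spdk with -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78, which is the launch whose initialization notices follow.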
00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.407 21:43:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:31.407 [2024-07-24 21:43:38.847276] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:18:31.408 [2024-07-24 21:43:38.847321] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:31.408 [2024-07-24 21:43:38.910781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:31.408 [2024-07-24 21:43:38.995166] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.408 [2024-07-24 21:43:38.995199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.408 [2024-07-24 21:43:38.995205] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.408 [2024-07-24 21:43:38.995211] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.408 [2024-07-24 21:43:38.995216] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.408 [2024-07-24 21:43:38.995339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:31.408 [2024-07-24 21:43:38.995451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:31.408 [2024-07-24 21:43:38.995558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:31.408 [2024-07-24 21:43:38.995559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.668 [2024-07-24 21:43:39.689949] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- 
# rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.668 Malloc0 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.668 [2024-07-24 21:43:39.730195] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:31.668 { 00:18:31.668 "params": { 00:18:31.668 "name": "Nvme$subsystem", 00:18:31.668 "trtype": "$TEST_TRANSPORT", 00:18:31.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:31.668 "adrfam": "ipv4", 00:18:31.668 "trsvcid": "$NVMF_PORT", 00:18:31.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:31.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:31.668 "hdgst": ${hdgst:-false}, 00:18:31.668 "ddgst": ${ddgst:-false} 00:18:31.668 }, 00:18:31.668 "method": "bdev_nvme_attach_controller" 00:18:31.668 } 00:18:31.668 EOF 00:18:31.668 )") 00:18:31.668 21:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:31.668 21:43:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:31.669 "params": { 00:18:31.669 "name": "Nvme1", 00:18:31.669 "trtype": "tcp", 00:18:31.669 "traddr": "10.0.0.2", 00:18:31.669 "adrfam": "ipv4", 00:18:31.669 "trsvcid": "4420", 00:18:31.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.669 "hdgst": false, 00:18:31.669 "ddgst": false 00:18:31.669 }, 00:18:31.669 "method": "bdev_nvme_attach_controller" 00:18:31.669 }' 00:18:31.669 [2024-07-24 21:43:39.775848] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:18:31.669 [2024-07-24 21:43:39.775897] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3077260 ] 00:18:31.929 [2024-07-24 21:43:39.833740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:31.929 [2024-07-24 21:43:39.920603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.929 [2024-07-24 21:43:39.920698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.929 [2024-07-24 21:43:39.920699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.189 I/O targets: 00:18:32.189 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:32.189 00:18:32.189 00:18:32.189 CUnit - A unit testing framework for C - Version 2.1-3 00:18:32.189 http://cunit.sourceforge.net/ 00:18:32.189 00:18:32.189 00:18:32.189 Suite: bdevio tests on: Nvme1n1 00:18:32.189 Test: blockdev write read block ...passed 00:18:32.189 Test: blockdev write zeroes read block ...passed 00:18:32.189 Test: blockdev write zeroes read no split ...passed 00:18:32.189 Test: blockdev write zeroes read split ...passed 00:18:32.448 Test: blockdev write zeroes read split partial ...passed 00:18:32.448 Test: blockdev reset ...[2024-07-24 21:43:40.311499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:32.448 [2024-07-24 21:43:40.311569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be300 (9): Bad file descriptor 00:18:32.448 [2024-07-24 21:43:40.342955] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
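With the target up, bdevio.sh provisions it over RPC: a TCP transport, a 64 MiB malloc bdev, one subsystem carrying that bdev as a namespace, and a TCP listener on 10.0.0.2:4420; bdevio then attaches from the root namespace using the generated bdev_nvme_attach_controller JSON shown above. The same sequence issued directly against scripts/rpc.py would look roughly like this (rpc_cmd in the harness wraps these calls; the target's default RPC socket /var/tmp/spdk.sock is reachable without entering the namespace):

# Sketch of the provisioning RPCs logged above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                        # 131072 x 512 B blocks = 64 MiB
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The CUnit suite that follows (23 bdevio tests) then exercises that namespace over the TCP connection.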
00:18:32.448 passed 00:18:32.448 Test: blockdev write read 8 blocks ...passed 00:18:32.448 Test: blockdev write read size > 128k ...passed 00:18:32.448 Test: blockdev write read invalid size ...passed 00:18:32.448 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:32.448 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:32.448 Test: blockdev write read max offset ...passed 00:18:32.448 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:32.448 Test: blockdev writev readv 8 blocks ...passed 00:18:32.448 Test: blockdev writev readv 30 x 1block ...passed 00:18:32.708 Test: blockdev writev readv block ...passed 00:18:32.708 Test: blockdev writev readv size > 128k ...passed 00:18:32.708 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:32.708 Test: blockdev comparev and writev ...[2024-07-24 21:43:40.576509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.708 [2024-07-24 21:43:40.576537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.708 [2024-07-24 21:43:40.576551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.708 [2024-07-24 21:43:40.576559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.708 [2024-07-24 21:43:40.576991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.708 [2024-07-24 21:43:40.577002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:32.708 [2024-07-24 21:43:40.577013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.708 [2024-07-24 21:43:40.577020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:32.708 [2024-07-24 21:43:40.577471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.708 [2024-07-24 21:43:40.577481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:32.708 [2024-07-24 21:43:40.577492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.708 [2024-07-24 21:43:40.577499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:32.708 [2024-07-24 21:43:40.577931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.708 [2024-07-24 21:43:40.577945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:32.708 [2024-07-24 21:43:40.577956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.708 [2024-07-24 21:43:40.577964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:32.708 passed 00:18:32.708 Test: blockdev nvme passthru rw ...passed 00:18:32.708 Test: blockdev nvme passthru vendor specific ...[2024-07-24 21:43:40.661903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.708 [2024-07-24 21:43:40.661918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:32.708 [2024-07-24 21:43:40.662282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.708 [2024-07-24 21:43:40.662292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:32.708 [2024-07-24 21:43:40.662653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.708 [2024-07-24 21:43:40.662663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:32.708 [2024-07-24 21:43:40.663018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.708 [2024-07-24 21:43:40.663028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:32.708 passed 00:18:32.708 Test: blockdev nvme admin passthru ...passed 00:18:32.708 Test: blockdev copy ...passed 00:18:32.708 00:18:32.708 Run Summary: Type Total Ran Passed Failed Inactive 00:18:32.708 suites 1 1 n/a 0 0 00:18:32.708 tests 23 23 23 0 0 00:18:32.708 asserts 152 152 152 0 n/a 00:18:32.708 00:18:32.708 Elapsed time = 1.238 seconds 00:18:32.968 21:43:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.968 21:43:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.968 21:43:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:32.968 rmmod nvme_tcp 00:18:32.968 rmmod nvme_fabrics 00:18:32.968 rmmod nvme_keyring 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3077219 ']' 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3077219 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3077219 ']' 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3077219 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.968 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3077219 00:18:33.228 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:33.228 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:33.228 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3077219' 00:18:33.228 killing process with pid 3077219 00:18:33.228 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3077219 00:18:33.228 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3077219 00:18:33.488 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:33.488 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:33.488 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:33.488 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.488 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:33.488 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.488 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.488 21:43:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.411 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:35.411 00:18:35.411 real 0m9.684s 00:18:35.411 user 0m12.818s 00:18:35.411 sys 0m4.565s 00:18:35.411 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:35.411 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.411 ************************************ 00:18:35.411 END TEST nvmf_bdevio_no_huge 00:18:35.411 ************************************ 00:18:35.411 21:43:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:35.411 21:43:43 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:35.411 21:43:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:35.411 21:43:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:35.671 ************************************ 00:18:35.671 START TEST nvmf_tls 00:18:35.671 ************************************ 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:35.671 * Looking for test storage... 00:18:35.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
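Before the nvmf_tls run that begins here, the bdevio test above closed out with the standard teardown: delete the subsystem, unload the NVMe/TCP initiator modules, stop nvmf_tgt, and undo the namespace wiring so the next test starts from a clean host. Reduced to plain commands (remove_spdk_ns is summarized as a netns delete; the harness helper does a little more bookkeeping):

# Sketch of the per-test teardown performed above ($rpc as in the earlier sketch).
nvmfpid=3077219                         # pid recorded when this test's nvmf_tgt started
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp                 # unloads nvme_tcp, nvme_fabrics, nvme_keyring as logged
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"      # killprocess: terminate the target launched for this test
ip netns delete cvl_0_0_ns_spdk         # remove_spdk_ns, reduced to its effect here
ip -4 addr flush cvl_0_1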
00:18:35.671 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:35.672 21:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:40.957 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:40.957 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:40.957 Found net devices under 0000:86:00.0: cvl_0_0 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:40.957 Found net devices under 0000:86:00.1: cvl_0_1 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.957 21:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:18:40.957 00:18:40.957 --- 10.0.0.2 ping statistics --- 00:18:40.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.957 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:18:40.957 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:18:40.957 00:18:40.957 --- 10.0.0.1 ping statistics --- 00:18:40.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.958 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3081000 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3081000 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3081000 ']' 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.958 21:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.958 [2024-07-24 21:43:49.046970] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:18:40.958 [2024-07-24 21:43:49.047018] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.218 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.218 [2024-07-24 21:43:49.105931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.218 [2024-07-24 21:43:49.177447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.218 [2024-07-24 21:43:49.177489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.218 [2024-07-24 21:43:49.177496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.218 [2024-07-24 21:43:49.177501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.218 [2024-07-24 21:43:49.177506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.218 [2024-07-24 21:43:49.177541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.788 21:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.788 21:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:41.788 21:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:41.788 21:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:41.788 21:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.788 21:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.788 21:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:41.788 21:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:42.046 true 00:18:42.046 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:42.047 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:42.306 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:42.306 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:42.306 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:42.566 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:42.566 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:42.566 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:42.566 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:42.566 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:18:42.827 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:42.827 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:43.087 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:43.087 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:43.087 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:43.087 21:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:43.087 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:43.087 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:43.087 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:43.347 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:43.347 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:43.347 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:43.347 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:43.347 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:43.606 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:43.606 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.NQ0zG0li2i 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.incCBiJJt0 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.NQ0zG0li2i 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.incCBiJJt0 00:18:43.865 21:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:44.125 21:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:44.384 21:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.NQ0zG0li2i 00:18:44.384 21:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NQ0zG0li2i 00:18:44.384 21:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:44.384 [2024-07-24 21:43:52.467901] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.384 21:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:44.644 21:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:44.904 [2024-07-24 21:43:52.812786] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:44.904 [2024-07-24 21:43:52.813023] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.904 21:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:44.904 malloc0 00:18:44.904 21:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:45.164 21:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NQ0zG0li2i 00:18:45.436 [2024-07-24 21:43:53.330501] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:45.436 21:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NQ0zG0li2i 00:18:45.436 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.429 Initializing NVMe Controllers 00:18:55.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:55.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:55.429 Initialization complete. Launching workers. 00:18:55.429 ======================================================== 00:18:55.429 Latency(us) 00:18:55.429 Device Information : IOPS MiB/s Average min max 00:18:55.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16502.36 64.46 3878.67 858.03 6780.48 00:18:55.429 ======================================================== 00:18:55.429 Total : 16502.36 64.46 3878.67 858.03 6780.48 00:18:55.429 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NQ0zG0li2i 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NQ0zG0li2i' 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3083353 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3083353 /var/tmp/bdevperf.sock 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3083353 ']' 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.429 21:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.429 [2024-07-24 21:44:03.486435] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:18:55.429 [2024-07-24 21:44:03.486485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083353 ] 00:18:55.429 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.429 [2024-07-24 21:44:03.536809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.688 [2024-07-24 21:44:03.616106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.258 21:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.258 21:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:56.258 21:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NQ0zG0li2i 00:18:56.517 [2024-07-24 21:44:04.455059] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.517 [2024-07-24 21:44:04.455144] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:56.517 TLSTESTn1 00:18:56.517 21:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:56.777 Running I/O for 10 seconds... 
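The PSK files created above (/tmp/tmp.NQ0zG0li2i and /tmp/tmp.incCBiJJt0) hold keys in the NVMe TLS interchange format that format_interchange_psk builds through the inline `python -` call from nvmf/common.sh: the configured key string is suffixed with its CRC32, base64-encoded, and wrapped as NVMeTLSkey-1:<hash>:<base64>:, where the hash field (01 for digest 1, 02 for digest 2 later in the run) appears to select the SHA-256 vs SHA-384 PSK hash. A minimal sketch of that computation, assuming a little-endian CRC placement as the output above suggests (not re-verified against nvmf/common.sh here):

    import base64
    import zlib

    def format_interchange_psk(key: str, hash_id: int) -> str:
        # Take the configured key as the literal ASCII string (as the test does),
        # append its little-endian CRC32, base64-encode, and tag with the hash id.
        raw = key.encode("ascii")
        crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
        b64 = base64.b64encode(raw + crc).decode("ascii")
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)

    # Expected to reproduce the first key printed above (digest 1),
    # assuming the CRC byte order matches the helper's.
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))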
00:19:06.765 00:19:06.765 Latency(us) 00:19:06.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.765 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:06.765 Verification LBA range: start 0x0 length 0x2000 00:19:06.765 TLSTESTn1 : 10.08 1303.32 5.09 0.00 0.00 97906.45 7123.48 173242.99 00:19:06.765 =================================================================================================================== 00:19:06.765 Total : 1303.32 5.09 0.00 0.00 97906.45 7123.48 173242.99 00:19:06.765 0 00:19:06.765 21:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:06.765 21:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3083353 00:19:06.765 21:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3083353 ']' 00:19:06.765 21:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3083353 00:19:06.765 21:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:06.765 21:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.765 21:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3083353 00:19:06.765 21:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:06.765 21:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:06.765 21:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3083353' 00:19:06.765 killing process with pid 3083353 00:19:06.765 21:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3083353 00:19:06.765 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.765 00:19:06.765 Latency(us) 00:19:06.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.765 =================================================================================================================== 00:19:06.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.765 [2024-07-24 21:44:14.821254] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:06.765 21:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3083353 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.incCBiJJt0 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.incCBiJJt0 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
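Before the failure cases that follow, it is worth restating what the successful run above was talking to: setup_nvmf_tgt (target/tls.sh@49-58) configured the target over JSON-RPC with a TCP transport, subsystem cnode1, a listener created with -k so a secure (TLS) channel is required, a malloc bdev as namespace 1, and host1 registered with the PSK file. A sketch of the core of that same sequence driven from Python via subprocess; the rpc.py path is the workspace-specific one from this run, and options such as framework_start_init and the ssl sock options are omitted:

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def rpc(*args: str) -> None:
        # Each call mirrors an rpc.py invocation traced in the log above.
        subprocess.run([RPC, *args], check=True)

    def setup_nvmf_tgt(psk_path: str) -> None:
        rpc("nvmf_create_transport", "-t", "tcp", "-o")
        rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
            "-s", "SPDK00000000000001", "-m", "10")
        # -k marks the listener as requiring TLS.
        rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
            "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
        rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
        rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
        rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
            "nqn.2016-06.io.spdk:host1", "--psk", psk_path)

    setup_nvmf_tgt("/tmp/tmp.NQ0zG0li2i")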
00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.incCBiJJt0 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.incCBiJJt0' 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3085338 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3085338 /var/tmp/bdevperf.sock 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3085338 ']' 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.026 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.026 [2024-07-24 21:44:15.051048] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:19:07.026 [2024-07-24 21:44:15.051095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085338 ] 00:19:07.026 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.026 [2024-07-24 21:44:15.100376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.286 [2024-07-24 21:44:15.179052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.890 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:07.890 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:07.890 21:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.incCBiJJt0 00:19:08.150 [2024-07-24 21:44:16.016628] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:08.150 [2024-07-24 21:44:16.016694] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:08.150 [2024-07-24 21:44:16.021614] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:08.150 [2024-07-24 21:44:16.022240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1511570 (107): Transport endpoint is not connected 00:19:08.150 [2024-07-24 21:44:16.023231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1511570 (9): Bad file descriptor 00:19:08.151 [2024-07-24 21:44:16.024232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:08.151 [2024-07-24 21:44:16.024249] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:08.151 [2024-07-24 21:44:16.024259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
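The request/response pair dumped next is the JSON-RPC exchange rpc.py had with the bdevperf application over /var/tmp/bdevperf.sock: the target cannot match the second key to any registered PSK, the TCP connection is dropped, and the attach surfaces as code -5 (Input/output error). A rough sketch of issuing the same call directly, assuming the plain JSON-RPC 2.0 framing SPDK's rpc client uses over a Unix stream socket; default fields such as prchk_reftag are omitted:

    import json
    import socket

    def attach_controller(sock_path: str, psk: str) -> dict:
        # Core fields mirror the "request:" block dumped in the log below.
        req = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "bdev_nvme_attach_controller",
            "params": {
                "name": "TLSTEST",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "psk": psk,
            },
        }
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            # Naive single-recv read; the real client keeps reading until the
            # JSON object is complete.
            return json.loads(s.recv(65536).decode())

    # With the mismatched key the response carries "code": -5, as dumped below.
    print(attach_controller("/var/tmp/bdevperf.sock", "/tmp/tmp.incCBiJJt0"))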
00:19:08.151 request: 00:19:08.151 { 00:19:08.151 "name": "TLSTEST", 00:19:08.151 "trtype": "tcp", 00:19:08.151 "traddr": "10.0.0.2", 00:19:08.151 "adrfam": "ipv4", 00:19:08.151 "trsvcid": "4420", 00:19:08.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.151 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.151 "prchk_reftag": false, 00:19:08.151 "prchk_guard": false, 00:19:08.151 "hdgst": false, 00:19:08.151 "ddgst": false, 00:19:08.151 "psk": "/tmp/tmp.incCBiJJt0", 00:19:08.151 "method": "bdev_nvme_attach_controller", 00:19:08.151 "req_id": 1 00:19:08.151 } 00:19:08.151 Got JSON-RPC error response 00:19:08.151 response: 00:19:08.151 { 00:19:08.151 "code": -5, 00:19:08.151 "message": "Input/output error" 00:19:08.151 } 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3085338 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3085338 ']' 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3085338 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3085338 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3085338' 00:19:08.151 killing process with pid 3085338 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3085338 00:19:08.151 Received shutdown signal, test time was about 10.000000 seconds 00:19:08.151 00:19:08.151 Latency(us) 00:19:08.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.151 =================================================================================================================== 00:19:08.151 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:08.151 [2024-07-24 21:44:16.084317] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3085338 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NQ0zG0li2i 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NQ0zG0li2i 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NQ0zG0li2i 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NQ0zG0li2i' 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3085496 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:08.151 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:08.411 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3085496 /var/tmp/bdevperf.sock 00:19:08.411 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3085496 ']' 00:19:08.412 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.412 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.412 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.412 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.412 21:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.412 [2024-07-24 21:44:16.309575] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:19:08.412 [2024-07-24 21:44:16.309622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085496 ] 00:19:08.412 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.412 [2024-07-24 21:44:16.360520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.412 [2024-07-24 21:44:16.434487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.350 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.350 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:09.350 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.NQ0zG0li2i 00:19:09.350 [2024-07-24 21:44:17.264701] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:09.350 [2024-07-24 21:44:17.264795] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:09.350 [2024-07-24 21:44:17.272509] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:09.350 [2024-07-24 21:44:17.272530] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:09.351 [2024-07-24 21:44:17.272570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:09.351 [2024-07-24 21:44:17.273445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a18570 (107): Transport endpoint is not connected 00:19:09.351 [2024-07-24 21:44:17.274438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a18570 (9): Bad file descriptor 00:19:09.351 [2024-07-24 21:44:17.275440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:09.351 [2024-07-24 21:44:17.275448] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:09.351 [2024-07-24 21:44:17.275457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
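This failure, and the cnode2 case that follows, come from the target-side PSK lookup rather than from the key bytes: posix.c resolves the TLS PSK by the identity string "NVMe0R01 <hostnqn> <subnqn>", and only the (host1, cnode1) pair was registered with nvmf_subsystem_add_host above, so host2 here and cnode2 below both hit "Could not find PSK for identity". A tiny illustrative sketch of that lookup; the helper name is invented, and the leading "NVMe0R01" tag is copied from the error lines as printed, not derived from the spec:

    def psk_identity(hostnqn: str, subnqn: str) -> str:
        # Matches the identity strings printed by tcp_sock_get_key in this log.
        return f"NVMe0R01 {hostnqn} {subnqn}"

    registered = {
        psk_identity("nqn.2016-06.io.spdk:host1",
                     "nqn.2016-06.io.spdk:cnode1"): "/tmp/tmp.NQ0zG0li2i",
    }

    for host in ("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:host2"):
        ident = psk_identity(host, "nqn.2016-06.io.spdk:cnode1")
        print(ident, "->", registered.get(ident, "no PSK found"))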
00:19:09.351 request: 00:19:09.351 { 00:19:09.351 "name": "TLSTEST", 00:19:09.351 "trtype": "tcp", 00:19:09.351 "traddr": "10.0.0.2", 00:19:09.351 "adrfam": "ipv4", 00:19:09.351 "trsvcid": "4420", 00:19:09.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.351 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:09.351 "prchk_reftag": false, 00:19:09.351 "prchk_guard": false, 00:19:09.351 "hdgst": false, 00:19:09.351 "ddgst": false, 00:19:09.351 "psk": "/tmp/tmp.NQ0zG0li2i", 00:19:09.351 "method": "bdev_nvme_attach_controller", 00:19:09.351 "req_id": 1 00:19:09.351 } 00:19:09.351 Got JSON-RPC error response 00:19:09.351 response: 00:19:09.351 { 00:19:09.351 "code": -5, 00:19:09.351 "message": "Input/output error" 00:19:09.351 } 00:19:09.351 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3085496 00:19:09.351 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3085496 ']' 00:19:09.351 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3085496 00:19:09.351 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:09.351 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:09.351 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3085496 00:19:09.351 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:09.351 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:09.351 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3085496' 00:19:09.351 killing process with pid 3085496 00:19:09.351 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3085496 00:19:09.351 Received shutdown signal, test time was about 10.000000 seconds 00:19:09.351 00:19:09.351 Latency(us) 00:19:09.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.351 =================================================================================================================== 00:19:09.351 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:09.351 [2024-07-24 21:44:17.337603] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:09.351 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3085496 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NQ0zG0li2i 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NQ0zG0li2i 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NQ0zG0li2i 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NQ0zG0li2i' 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3085669 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3085669 /var/tmp/bdevperf.sock 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3085669 ']' 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.611 21:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.611 [2024-07-24 21:44:17.558098] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:19:09.611 [2024-07-24 21:44:17.558149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085669 ] 00:19:09.611 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.611 [2024-07-24 21:44:17.609992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.611 [2024-07-24 21:44:17.681125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NQ0zG0li2i 00:19:10.549 [2024-07-24 21:44:18.519113] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.549 [2024-07-24 21:44:18.519188] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:10.549 [2024-07-24 21:44:18.525697] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:10.549 [2024-07-24 21:44:18.525719] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:10.549 [2024-07-24 21:44:18.525760] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:10.549 [2024-07-24 21:44:18.526814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175f570 (107): Transport endpoint is not connected 00:19:10.549 [2024-07-24 21:44:18.527807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175f570 (9): Bad file descriptor 00:19:10.549 [2024-07-24 21:44:18.528809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:10.549 [2024-07-24 21:44:18.528818] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:10.549 [2024-07-24 21:44:18.528828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:10.549 request: 00:19:10.549 { 00:19:10.549 "name": "TLSTEST", 00:19:10.549 "trtype": "tcp", 00:19:10.549 "traddr": "10.0.0.2", 00:19:10.549 "adrfam": "ipv4", 00:19:10.549 "trsvcid": "4420", 00:19:10.549 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:10.549 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:10.549 "prchk_reftag": false, 00:19:10.549 "prchk_guard": false, 00:19:10.549 "hdgst": false, 00:19:10.549 "ddgst": false, 00:19:10.549 "psk": "/tmp/tmp.NQ0zG0li2i", 00:19:10.549 "method": "bdev_nvme_attach_controller", 00:19:10.549 "req_id": 1 00:19:10.549 } 00:19:10.549 Got JSON-RPC error response 00:19:10.549 response: 00:19:10.549 { 00:19:10.549 "code": -5, 00:19:10.549 "message": "Input/output error" 00:19:10.549 } 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3085669 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3085669 ']' 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3085669 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3085669 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3085669' 00:19:10.549 killing process with pid 3085669 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3085669 00:19:10.549 Received shutdown signal, test time was about 10.000000 seconds 00:19:10.549 00:19:10.549 Latency(us) 00:19:10.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.549 =================================================================================================================== 00:19:10.549 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:10.549 [2024-07-24 21:44:18.592678] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:10.549 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3085669 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3085905 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3085905 /var/tmp/bdevperf.sock 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3085905 ']' 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.808 21:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.808 [2024-07-24 21:44:18.814731] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:19:10.808 [2024-07-24 21:44:18.814783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085905 ] 00:19:10.808 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.808 [2024-07-24 21:44:18.864987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.066 [2024-07-24 21:44:18.933845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.634 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.634 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:11.634 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:11.893 [2024-07-24 21:44:19.768888] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:11.893 [2024-07-24 21:44:19.771290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2349af0 (9): Bad file descriptor 00:19:11.894 [2024-07-24 21:44:19.772288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:11.894 [2024-07-24 21:44:19.772298] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:11.894 [2024-07-24 21:44:19.772307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
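The waitforlisten step that precedes each of these bdevperf startups simply blocks until the freshly launched application answers on its RPC Unix socket (here /var/tmp/bdevperf.sock, /var/tmp/spdk.sock for the nvmf target later). A rough Python equivalent of that wait loop; the real helper in autotest_common.sh also keeps checking that the pid is still alive, which is skipped here:

    import socket
    import time

    def wait_for_rpc(sock_path: str, timeout: float = 30.0) -> None:
        # Poll until the SPDK application accepts connections on its RPC socket.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(sock_path)
                    return
            except OSError:
                time.sleep(0.1)
        raise TimeoutError(f"{sock_path} did not come up within {timeout}s")

    wait_for_rpc("/var/tmp/bdevperf.sock")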
00:19:11.894 request: 00:19:11.894 { 00:19:11.894 "name": "TLSTEST", 00:19:11.894 "trtype": "tcp", 00:19:11.894 "traddr": "10.0.0.2", 00:19:11.894 "adrfam": "ipv4", 00:19:11.894 "trsvcid": "4420", 00:19:11.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:11.894 "prchk_reftag": false, 00:19:11.894 "prchk_guard": false, 00:19:11.894 "hdgst": false, 00:19:11.894 "ddgst": false, 00:19:11.894 "method": "bdev_nvme_attach_controller", 00:19:11.894 "req_id": 1 00:19:11.894 } 00:19:11.894 Got JSON-RPC error response 00:19:11.894 response: 00:19:11.894 { 00:19:11.894 "code": -5, 00:19:11.894 "message": "Input/output error" 00:19:11.894 } 00:19:11.894 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3085905 00:19:11.894 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3085905 ']' 00:19:11.894 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3085905 00:19:11.894 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:11.894 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:11.894 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3085905 00:19:11.894 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:11.894 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:11.894 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3085905' 00:19:11.894 killing process with pid 3085905 00:19:11.894 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3085905 00:19:11.894 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.894 00:19:11.894 Latency(us) 00:19:11.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.894 =================================================================================================================== 00:19:11.894 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:11.894 21:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3085905 00:19:11.894 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:11.894 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:11.894 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:11.894 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:11.894 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:11.894 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 3081000 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3081000 ']' 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3081000 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3081000 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3081000' 00:19:12.153 killing process with pid 3081000 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3081000 00:19:12.153 [2024-07-24 21:44:20.058124] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3081000 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:12.153 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.I7JMZbPFGl 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.I7JMZbPFGl 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3086159 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3086159 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3086159 ']' 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.413 21:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.413 21:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.413 [2024-07-24 21:44:20.359755] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:19:12.413 [2024-07-24 21:44:20.359808] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.413 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.413 [2024-07-24 21:44:20.416627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.413 [2024-07-24 21:44:20.486407] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.413 [2024-07-24 21:44:20.486446] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.413 [2024-07-24 21:44:20.486453] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.413 [2024-07-24 21:44:20.486459] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.413 [2024-07-24 21:44:20.486464] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:12.413 [2024-07-24 21:44:20.486496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.354 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.354 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:13.354 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:13.354 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:13.354 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.354 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.354 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.I7JMZbPFGl 00:19:13.354 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.I7JMZbPFGl 00:19:13.354 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:13.354 [2024-07-24 21:44:21.358129] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.354 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:13.613 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:13.613 [2024-07-24 21:44:21.694998] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:13.613 [2024-07-24 21:44:21.695194] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.613 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:13.872 malloc0 00:19:13.872 21:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I7JMZbPFGl 00:19:14.131 [2024-07-24 21:44:22.212581] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I7JMZbPFGl 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.I7JMZbPFGl' 00:19:14.131 21:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3086624 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3086624 /var/tmp/bdevperf.sock 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3086624 ']' 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.131 21:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.391 [2024-07-24 21:44:22.276328] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:19:14.391 [2024-07-24 21:44:22.276376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086624 ] 00:19:14.391 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.391 [2024-07-24 21:44:22.326198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.391 [2024-07-24 21:44:22.398909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.329 21:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.329 21:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:15.329 21:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I7JMZbPFGl 00:19:15.329 [2024-07-24 21:44:23.241221] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.329 [2024-07-24 21:44:23.241296] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:15.329 TLSTESTn1 00:19:15.329 21:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:15.588 Running I/O for 10 seconds... 
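The MiB/s column in the result table that follows is just the average IOPS multiplied by the 4096-byte IO size from the bdevperf command line; a quick arithmetic check against the second TLSTESTn1 run:

    IO_SIZE = 4096                   # -o 4096 on the bdevperf command line
    iops = 1280.66                   # average IOPS reported for TLSTESTn1 below
    mib_s = iops * IO_SIZE / (1024 * 1024)
    print(f"{mib_s:.2f} MiB/s")      # ~5.00, matching the table (5.09 for the 1303.32 IOPS run earlier)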
00:19:25.567 00:19:25.567 Latency(us) 00:19:25.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.567 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:25.567 Verification LBA range: start 0x0 length 0x2000 00:19:25.567 TLSTESTn1 : 10.10 1280.66 5.00 0.00 0.00 99562.16 6525.11 148624.25 00:19:25.567 =================================================================================================================== 00:19:25.567 Total : 1280.66 5.00 0.00 0.00 99562.16 6525.11 148624.25 00:19:25.567 0 00:19:25.567 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:25.567 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3086624 00:19:25.567 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3086624 ']' 00:19:25.567 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3086624 00:19:25.567 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:25.567 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.567 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3086624 00:19:25.567 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:25.567 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:25.567 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3086624' 00:19:25.567 killing process with pid 3086624 00:19:25.567 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3086624 00:19:25.567 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.567 00:19:25.567 Latency(us) 00:19:25.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.568 =================================================================================================================== 00:19:25.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.568 [2024-07-24 21:44:33.630976] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:25.568 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3086624 00:19:25.827 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.I7JMZbPFGl 00:19:25.827 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I7JMZbPFGl 00:19:25.827 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:25.827 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I7JMZbPFGl 00:19:25.827 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:25.827 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:25.827 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:25.827 
21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:25.827 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I7JMZbPFGl 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.I7JMZbPFGl' 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3088470 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3088470 /var/tmp/bdevperf.sock 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3088470 ']' 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.828 21:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.828 [2024-07-24 21:44:33.863083] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:19:25.828 [2024-07-24 21:44:33.863133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088470 ] 00:19:25.828 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.828 [2024-07-24 21:44:33.913620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.087 [2024-07-24 21:44:33.982444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.656 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.656 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:26.656 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I7JMZbPFGl 00:19:26.917 [2024-07-24 21:44:34.827915] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.917 [2024-07-24 21:44:34.827960] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:26.917 [2024-07-24 21:44:34.827967] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.I7JMZbPFGl 00:19:26.917 request: 00:19:26.917 { 00:19:26.917 "name": "TLSTEST", 00:19:26.917 "trtype": "tcp", 00:19:26.917 "traddr": "10.0.0.2", 00:19:26.917 "adrfam": "ipv4", 00:19:26.917 "trsvcid": "4420", 00:19:26.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.917 "prchk_reftag": false, 00:19:26.917 "prchk_guard": false, 00:19:26.917 "hdgst": false, 00:19:26.917 "ddgst": false, 00:19:26.917 "psk": "/tmp/tmp.I7JMZbPFGl", 00:19:26.917 "method": "bdev_nvme_attach_controller", 00:19:26.917 "req_id": 1 00:19:26.917 } 00:19:26.917 Got JSON-RPC error response 00:19:26.917 response: 00:19:26.917 { 00:19:26.917 "code": -1, 00:19:26.917 "message": "Operation not permitted" 00:19:26.917 } 00:19:26.917 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3088470 00:19:26.917 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3088470 ']' 00:19:26.917 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3088470 00:19:26.917 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:26.917 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.917 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3088470 00:19:26.917 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:26.917 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:26.917 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3088470' 00:19:26.917 killing process with pid 3088470 00:19:26.917 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3088470 00:19:26.917 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.917 
00:19:26.917 Latency(us) 00:19:26.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.917 =================================================================================================================== 00:19:26.917 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.917 21:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3088470 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 3086159 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3086159 ']' 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3086159 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3086159 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3086159' 00:19:27.176 killing process with pid 3086159 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3086159 00:19:27.176 [2024-07-24 21:44:35.114360] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:27.176 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3086159 00:19:27.436 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:27.436 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.436 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:27.436 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.436 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3088710 00:19:27.436 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:27.436 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3088710 00:19:27.436 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3088710 ']' 00:19:27.436 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.436 21:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.436 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.436 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.436 21:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.436 [2024-07-24 21:44:35.363435] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:19:27.436 [2024-07-24 21:44:35.363480] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.436 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.436 [2024-07-24 21:44:35.419713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.436 [2024-07-24 21:44:35.487486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.436 [2024-07-24 21:44:35.487528] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.436 [2024-07-24 21:44:35.487535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.436 [2024-07-24 21:44:35.487541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.436 [2024-07-24 21:44:35.487546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
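The failed attach above is the expected negative case: after chmod 0666 makes the key world readable, the initiator side refuses to load it (bdev_nvme logs 'Incorrect permissions for PSK file') and the bdev_nvme_attach_controller RPC returns 'Operation not permitted'. The test suggests the PSK loader only accepts keys that are not readable by group or other, which is why the script later restores owner-only permissions before the final successful run; a one-line sketch using the same example key path:

  chmod 0600 /tmp/tmp.I7JMZbPFGl   # owner read/write only; the 0666 variant above is rejected

The nvmf_tgt starting up here exercises the same check on the target side: nvmf_subsystem_add_host with the still world-readable key fails further below with 'Internal error' (-32603) before the permissions are tightened.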
00:19:27.436 [2024-07-24 21:44:35.487585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.I7JMZbPFGl 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.I7JMZbPFGl 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.I7JMZbPFGl 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.I7JMZbPFGl 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:28.376 [2024-07-24 21:44:36.358743] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.376 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:28.635 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:28.635 [2024-07-24 21:44:36.699612] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:28.635 [2024-07-24 21:44:36.699814] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.635 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:28.893 malloc0 00:19:28.893 21:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:29.152 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I7JMZbPFGl 00:19:29.152 [2024-07-24 21:44:37.212960] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:29.152 [2024-07-24 21:44:37.212983] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:29.152 [2024-07-24 21:44:37.213009] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:29.152 request: 00:19:29.152 { 00:19:29.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.152 "host": "nqn.2016-06.io.spdk:host1", 00:19:29.153 "psk": "/tmp/tmp.I7JMZbPFGl", 00:19:29.153 "method": "nvmf_subsystem_add_host", 00:19:29.153 "req_id": 1 00:19:29.153 } 00:19:29.153 Got JSON-RPC error response 00:19:29.153 response: 00:19:29.153 { 00:19:29.153 "code": -32603, 00:19:29.153 "message": "Internal error" 00:19:29.153 } 00:19:29.153 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:29.153 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:29.153 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:29.153 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:29.153 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 3088710 00:19:29.153 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3088710 ']' 00:19:29.153 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3088710 00:19:29.153 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:29.153 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.153 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3088710 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3088710' 00:19:29.413 killing process with pid 3088710 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3088710 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3088710 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.I7JMZbPFGl 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3089167 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3089167 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3089167 ']' 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.413 21:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.413 [2024-07-24 21:44:37.527777] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:19:29.413 [2024-07-24 21:44:37.527828] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.673 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.673 [2024-07-24 21:44:37.586224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.673 [2024-07-24 21:44:37.657640] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.673 [2024-07-24 21:44:37.657680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.673 [2024-07-24 21:44:37.657687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.673 [2024-07-24 21:44:37.657692] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.673 [2024-07-24 21:44:37.657697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
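With the key back to 0600, the target is rebuilt from scratch; the setup_nvmf_tgt calls traced below create the TCP transport, the subsystem with a malloc-backed namespace, a TLS-enabled listener (the -k flag) and a host entry that carries the PSK. A condensed sketch of the same rpc.py sequence, with the workspace path shortened to scripts/rpc.py and the values copied from this run:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests TLS on the listener
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I7JMZbPFGl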
00:19:29.673 [2024-07-24 21:44:37.657736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.242 21:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.242 21:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:30.242 21:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:30.242 21:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:30.242 21:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.501 21:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.501 21:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.I7JMZbPFGl 00:19:30.501 21:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.I7JMZbPFGl 00:19:30.501 21:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:30.501 [2024-07-24 21:44:38.512003] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.501 21:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:30.761 21:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:30.761 [2024-07-24 21:44:38.868926] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:30.761 [2024-07-24 21:44:38.869118] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.021 21:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:31.021 malloc0 00:19:31.021 21:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:31.357 21:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I7JMZbPFGl 00:19:31.357 [2024-07-24 21:44:39.378261] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:31.357 21:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.357 21:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3089458 00:19:31.357 21:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.357 21:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3089458 /var/tmp/bdevperf.sock 00:19:31.357 21:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- 
# '[' -z 3089458 ']' 00:19:31.357 21:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.357 21:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:31.357 21:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.357 21:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:31.357 21:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.357 [2024-07-24 21:44:39.423312] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:19:31.357 [2024-07-24 21:44:39.423356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089458 ] 00:19:31.617 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.617 [2024-07-24 21:44:39.471244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.617 [2024-07-24 21:44:39.546441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.186 21:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.186 21:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:32.186 21:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I7JMZbPFGl 00:19:32.445 [2024-07-24 21:44:40.392493] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.445 [2024-07-24 21:44:40.392584] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:32.445 TLSTESTn1 00:19:32.445 21:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:32.705 21:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:32.705 "subsystems": [ 00:19:32.705 { 00:19:32.705 "subsystem": "keyring", 00:19:32.705 "config": [] 00:19:32.705 }, 00:19:32.705 { 00:19:32.705 "subsystem": "iobuf", 00:19:32.705 "config": [ 00:19:32.705 { 00:19:32.705 "method": "iobuf_set_options", 00:19:32.705 "params": { 00:19:32.705 "small_pool_count": 8192, 00:19:32.705 "large_pool_count": 1024, 00:19:32.705 "small_bufsize": 8192, 00:19:32.705 "large_bufsize": 135168 00:19:32.705 } 00:19:32.705 } 00:19:32.705 ] 00:19:32.705 }, 00:19:32.706 { 00:19:32.706 "subsystem": "sock", 00:19:32.706 "config": [ 00:19:32.706 { 00:19:32.706 "method": "sock_set_default_impl", 00:19:32.706 "params": { 00:19:32.706 "impl_name": "posix" 00:19:32.706 } 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "method": "sock_impl_set_options", 00:19:32.706 "params": { 00:19:32.706 "impl_name": "ssl", 00:19:32.706 "recv_buf_size": 4096, 00:19:32.706 "send_buf_size": 4096, 
00:19:32.706 "enable_recv_pipe": true, 00:19:32.706 "enable_quickack": false, 00:19:32.706 "enable_placement_id": 0, 00:19:32.706 "enable_zerocopy_send_server": true, 00:19:32.706 "enable_zerocopy_send_client": false, 00:19:32.706 "zerocopy_threshold": 0, 00:19:32.706 "tls_version": 0, 00:19:32.706 "enable_ktls": false 00:19:32.706 } 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "method": "sock_impl_set_options", 00:19:32.706 "params": { 00:19:32.706 "impl_name": "posix", 00:19:32.706 "recv_buf_size": 2097152, 00:19:32.706 "send_buf_size": 2097152, 00:19:32.706 "enable_recv_pipe": true, 00:19:32.706 "enable_quickack": false, 00:19:32.706 "enable_placement_id": 0, 00:19:32.706 "enable_zerocopy_send_server": true, 00:19:32.706 "enable_zerocopy_send_client": false, 00:19:32.706 "zerocopy_threshold": 0, 00:19:32.706 "tls_version": 0, 00:19:32.706 "enable_ktls": false 00:19:32.706 } 00:19:32.706 } 00:19:32.706 ] 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "subsystem": "vmd", 00:19:32.706 "config": [] 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "subsystem": "accel", 00:19:32.706 "config": [ 00:19:32.706 { 00:19:32.706 "method": "accel_set_options", 00:19:32.706 "params": { 00:19:32.706 "small_cache_size": 128, 00:19:32.706 "large_cache_size": 16, 00:19:32.706 "task_count": 2048, 00:19:32.706 "sequence_count": 2048, 00:19:32.706 "buf_count": 2048 00:19:32.706 } 00:19:32.706 } 00:19:32.706 ] 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "subsystem": "bdev", 00:19:32.706 "config": [ 00:19:32.706 { 00:19:32.706 "method": "bdev_set_options", 00:19:32.706 "params": { 00:19:32.706 "bdev_io_pool_size": 65535, 00:19:32.706 "bdev_io_cache_size": 256, 00:19:32.706 "bdev_auto_examine": true, 00:19:32.706 "iobuf_small_cache_size": 128, 00:19:32.706 "iobuf_large_cache_size": 16 00:19:32.706 } 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "method": "bdev_raid_set_options", 00:19:32.706 "params": { 00:19:32.706 "process_window_size_kb": 1024, 00:19:32.706 "process_max_bandwidth_mb_sec": 0 00:19:32.706 } 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "method": "bdev_iscsi_set_options", 00:19:32.706 "params": { 00:19:32.706 "timeout_sec": 30 00:19:32.706 } 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "method": "bdev_nvme_set_options", 00:19:32.706 "params": { 00:19:32.706 "action_on_timeout": "none", 00:19:32.706 "timeout_us": 0, 00:19:32.706 "timeout_admin_us": 0, 00:19:32.706 "keep_alive_timeout_ms": 10000, 00:19:32.706 "arbitration_burst": 0, 00:19:32.706 "low_priority_weight": 0, 00:19:32.706 "medium_priority_weight": 0, 00:19:32.706 "high_priority_weight": 0, 00:19:32.706 "nvme_adminq_poll_period_us": 10000, 00:19:32.706 "nvme_ioq_poll_period_us": 0, 00:19:32.706 "io_queue_requests": 0, 00:19:32.706 "delay_cmd_submit": true, 00:19:32.706 "transport_retry_count": 4, 00:19:32.706 "bdev_retry_count": 3, 00:19:32.706 "transport_ack_timeout": 0, 00:19:32.706 "ctrlr_loss_timeout_sec": 0, 00:19:32.706 "reconnect_delay_sec": 0, 00:19:32.706 "fast_io_fail_timeout_sec": 0, 00:19:32.706 "disable_auto_failback": false, 00:19:32.706 "generate_uuids": false, 00:19:32.706 "transport_tos": 0, 00:19:32.706 "nvme_error_stat": false, 00:19:32.706 "rdma_srq_size": 0, 00:19:32.706 "io_path_stat": false, 00:19:32.706 "allow_accel_sequence": false, 00:19:32.706 "rdma_max_cq_size": 0, 00:19:32.706 "rdma_cm_event_timeout_ms": 0, 00:19:32.706 "dhchap_digests": [ 00:19:32.706 "sha256", 00:19:32.706 "sha384", 00:19:32.706 "sha512" 00:19:32.706 ], 00:19:32.706 "dhchap_dhgroups": [ 00:19:32.706 "null", 00:19:32.706 "ffdhe2048", 00:19:32.706 
"ffdhe3072", 00:19:32.706 "ffdhe4096", 00:19:32.706 "ffdhe6144", 00:19:32.706 "ffdhe8192" 00:19:32.706 ] 00:19:32.706 } 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "method": "bdev_nvme_set_hotplug", 00:19:32.706 "params": { 00:19:32.706 "period_us": 100000, 00:19:32.706 "enable": false 00:19:32.706 } 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "method": "bdev_malloc_create", 00:19:32.706 "params": { 00:19:32.706 "name": "malloc0", 00:19:32.706 "num_blocks": 8192, 00:19:32.706 "block_size": 4096, 00:19:32.706 "physical_block_size": 4096, 00:19:32.706 "uuid": "71c2beee-beee-4f9e-b60b-42eb05800bde", 00:19:32.706 "optimal_io_boundary": 0, 00:19:32.706 "md_size": 0, 00:19:32.706 "dif_type": 0, 00:19:32.706 "dif_is_head_of_md": false, 00:19:32.706 "dif_pi_format": 0 00:19:32.706 } 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "method": "bdev_wait_for_examine" 00:19:32.706 } 00:19:32.706 ] 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "subsystem": "nbd", 00:19:32.706 "config": [] 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "subsystem": "scheduler", 00:19:32.706 "config": [ 00:19:32.706 { 00:19:32.706 "method": "framework_set_scheduler", 00:19:32.706 "params": { 00:19:32.706 "name": "static" 00:19:32.706 } 00:19:32.706 } 00:19:32.706 ] 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "subsystem": "nvmf", 00:19:32.706 "config": [ 00:19:32.706 { 00:19:32.706 "method": "nvmf_set_config", 00:19:32.706 "params": { 00:19:32.706 "discovery_filter": "match_any", 00:19:32.706 "admin_cmd_passthru": { 00:19:32.706 "identify_ctrlr": false 00:19:32.706 } 00:19:32.706 } 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "method": "nvmf_set_max_subsystems", 00:19:32.706 "params": { 00:19:32.706 "max_subsystems": 1024 00:19:32.706 } 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "method": "nvmf_set_crdt", 00:19:32.706 "params": { 00:19:32.706 "crdt1": 0, 00:19:32.706 "crdt2": 0, 00:19:32.706 "crdt3": 0 00:19:32.706 } 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "method": "nvmf_create_transport", 00:19:32.706 "params": { 00:19:32.706 "trtype": "TCP", 00:19:32.706 "max_queue_depth": 128, 00:19:32.706 "max_io_qpairs_per_ctrlr": 127, 00:19:32.706 "in_capsule_data_size": 4096, 00:19:32.706 "max_io_size": 131072, 00:19:32.706 "io_unit_size": 131072, 00:19:32.706 "max_aq_depth": 128, 00:19:32.706 "num_shared_buffers": 511, 00:19:32.706 "buf_cache_size": 4294967295, 00:19:32.706 "dif_insert_or_strip": false, 00:19:32.706 "zcopy": false, 00:19:32.706 "c2h_success": false, 00:19:32.706 "sock_priority": 0, 00:19:32.706 "abort_timeout_sec": 1, 00:19:32.706 "ack_timeout": 0, 00:19:32.706 "data_wr_pool_size": 0 00:19:32.706 } 00:19:32.706 }, 00:19:32.706 { 00:19:32.706 "method": "nvmf_create_subsystem", 00:19:32.706 "params": { 00:19:32.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.706 "allow_any_host": false, 00:19:32.706 "serial_number": "SPDK00000000000001", 00:19:32.706 "model_number": "SPDK bdev Controller", 00:19:32.706 "max_namespaces": 10, 00:19:32.706 "min_cntlid": 1, 00:19:32.706 "max_cntlid": 65519, 00:19:32.707 "ana_reporting": false 00:19:32.707 } 00:19:32.707 }, 00:19:32.707 { 00:19:32.707 "method": "nvmf_subsystem_add_host", 00:19:32.707 "params": { 00:19:32.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.707 "host": "nqn.2016-06.io.spdk:host1", 00:19:32.707 "psk": "/tmp/tmp.I7JMZbPFGl" 00:19:32.707 } 00:19:32.707 }, 00:19:32.707 { 00:19:32.707 "method": "nvmf_subsystem_add_ns", 00:19:32.707 "params": { 00:19:32.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.707 "namespace": { 00:19:32.707 "nsid": 1, 00:19:32.707 
"bdev_name": "malloc0", 00:19:32.707 "nguid": "71C2BEEEBEEE4F9EB60B42EB05800BDE", 00:19:32.707 "uuid": "71c2beee-beee-4f9e-b60b-42eb05800bde", 00:19:32.707 "no_auto_visible": false 00:19:32.707 } 00:19:32.707 } 00:19:32.707 }, 00:19:32.707 { 00:19:32.707 "method": "nvmf_subsystem_add_listener", 00:19:32.707 "params": { 00:19:32.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.707 "listen_address": { 00:19:32.707 "trtype": "TCP", 00:19:32.707 "adrfam": "IPv4", 00:19:32.707 "traddr": "10.0.0.2", 00:19:32.707 "trsvcid": "4420" 00:19:32.707 }, 00:19:32.707 "secure_channel": true 00:19:32.707 } 00:19:32.707 } 00:19:32.707 ] 00:19:32.707 } 00:19:32.707 ] 00:19:32.707 }' 00:19:32.707 21:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:32.967 21:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:32.967 "subsystems": [ 00:19:32.967 { 00:19:32.967 "subsystem": "keyring", 00:19:32.967 "config": [] 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "subsystem": "iobuf", 00:19:32.967 "config": [ 00:19:32.967 { 00:19:32.967 "method": "iobuf_set_options", 00:19:32.967 "params": { 00:19:32.967 "small_pool_count": 8192, 00:19:32.967 "large_pool_count": 1024, 00:19:32.967 "small_bufsize": 8192, 00:19:32.967 "large_bufsize": 135168 00:19:32.967 } 00:19:32.967 } 00:19:32.967 ] 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "subsystem": "sock", 00:19:32.967 "config": [ 00:19:32.967 { 00:19:32.967 "method": "sock_set_default_impl", 00:19:32.967 "params": { 00:19:32.967 "impl_name": "posix" 00:19:32.967 } 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "method": "sock_impl_set_options", 00:19:32.967 "params": { 00:19:32.967 "impl_name": "ssl", 00:19:32.967 "recv_buf_size": 4096, 00:19:32.967 "send_buf_size": 4096, 00:19:32.967 "enable_recv_pipe": true, 00:19:32.967 "enable_quickack": false, 00:19:32.967 "enable_placement_id": 0, 00:19:32.967 "enable_zerocopy_send_server": true, 00:19:32.967 "enable_zerocopy_send_client": false, 00:19:32.967 "zerocopy_threshold": 0, 00:19:32.967 "tls_version": 0, 00:19:32.967 "enable_ktls": false 00:19:32.967 } 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "method": "sock_impl_set_options", 00:19:32.967 "params": { 00:19:32.967 "impl_name": "posix", 00:19:32.967 "recv_buf_size": 2097152, 00:19:32.967 "send_buf_size": 2097152, 00:19:32.967 "enable_recv_pipe": true, 00:19:32.967 "enable_quickack": false, 00:19:32.967 "enable_placement_id": 0, 00:19:32.967 "enable_zerocopy_send_server": true, 00:19:32.967 "enable_zerocopy_send_client": false, 00:19:32.967 "zerocopy_threshold": 0, 00:19:32.967 "tls_version": 0, 00:19:32.967 "enable_ktls": false 00:19:32.967 } 00:19:32.967 } 00:19:32.967 ] 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "subsystem": "vmd", 00:19:32.967 "config": [] 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "subsystem": "accel", 00:19:32.967 "config": [ 00:19:32.967 { 00:19:32.967 "method": "accel_set_options", 00:19:32.967 "params": { 00:19:32.967 "small_cache_size": 128, 00:19:32.967 "large_cache_size": 16, 00:19:32.967 "task_count": 2048, 00:19:32.967 "sequence_count": 2048, 00:19:32.967 "buf_count": 2048 00:19:32.967 } 00:19:32.967 } 00:19:32.967 ] 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "subsystem": "bdev", 00:19:32.967 "config": [ 00:19:32.967 { 00:19:32.967 "method": "bdev_set_options", 00:19:32.967 "params": { 00:19:32.967 "bdev_io_pool_size": 65535, 00:19:32.967 "bdev_io_cache_size": 256, 00:19:32.967 
"bdev_auto_examine": true, 00:19:32.967 "iobuf_small_cache_size": 128, 00:19:32.967 "iobuf_large_cache_size": 16 00:19:32.967 } 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "method": "bdev_raid_set_options", 00:19:32.967 "params": { 00:19:32.967 "process_window_size_kb": 1024, 00:19:32.967 "process_max_bandwidth_mb_sec": 0 00:19:32.967 } 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "method": "bdev_iscsi_set_options", 00:19:32.967 "params": { 00:19:32.967 "timeout_sec": 30 00:19:32.967 } 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "method": "bdev_nvme_set_options", 00:19:32.967 "params": { 00:19:32.967 "action_on_timeout": "none", 00:19:32.967 "timeout_us": 0, 00:19:32.967 "timeout_admin_us": 0, 00:19:32.967 "keep_alive_timeout_ms": 10000, 00:19:32.967 "arbitration_burst": 0, 00:19:32.967 "low_priority_weight": 0, 00:19:32.967 "medium_priority_weight": 0, 00:19:32.967 "high_priority_weight": 0, 00:19:32.967 "nvme_adminq_poll_period_us": 10000, 00:19:32.967 "nvme_ioq_poll_period_us": 0, 00:19:32.967 "io_queue_requests": 512, 00:19:32.967 "delay_cmd_submit": true, 00:19:32.967 "transport_retry_count": 4, 00:19:32.967 "bdev_retry_count": 3, 00:19:32.967 "transport_ack_timeout": 0, 00:19:32.967 "ctrlr_loss_timeout_sec": 0, 00:19:32.967 "reconnect_delay_sec": 0, 00:19:32.967 "fast_io_fail_timeout_sec": 0, 00:19:32.967 "disable_auto_failback": false, 00:19:32.967 "generate_uuids": false, 00:19:32.967 "transport_tos": 0, 00:19:32.967 "nvme_error_stat": false, 00:19:32.967 "rdma_srq_size": 0, 00:19:32.967 "io_path_stat": false, 00:19:32.967 "allow_accel_sequence": false, 00:19:32.967 "rdma_max_cq_size": 0, 00:19:32.967 "rdma_cm_event_timeout_ms": 0, 00:19:32.967 "dhchap_digests": [ 00:19:32.967 "sha256", 00:19:32.967 "sha384", 00:19:32.967 "sha512" 00:19:32.967 ], 00:19:32.967 "dhchap_dhgroups": [ 00:19:32.967 "null", 00:19:32.967 "ffdhe2048", 00:19:32.967 "ffdhe3072", 00:19:32.967 "ffdhe4096", 00:19:32.967 "ffdhe6144", 00:19:32.967 "ffdhe8192" 00:19:32.967 ] 00:19:32.967 } 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "method": "bdev_nvme_attach_controller", 00:19:32.967 "params": { 00:19:32.967 "name": "TLSTEST", 00:19:32.967 "trtype": "TCP", 00:19:32.967 "adrfam": "IPv4", 00:19:32.967 "traddr": "10.0.0.2", 00:19:32.967 "trsvcid": "4420", 00:19:32.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.967 "prchk_reftag": false, 00:19:32.967 "prchk_guard": false, 00:19:32.967 "ctrlr_loss_timeout_sec": 0, 00:19:32.967 "reconnect_delay_sec": 0, 00:19:32.967 "fast_io_fail_timeout_sec": 0, 00:19:32.967 "psk": "/tmp/tmp.I7JMZbPFGl", 00:19:32.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.967 "hdgst": false, 00:19:32.967 "ddgst": false 00:19:32.967 } 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "method": "bdev_nvme_set_hotplug", 00:19:32.967 "params": { 00:19:32.967 "period_us": 100000, 00:19:32.967 "enable": false 00:19:32.967 } 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "method": "bdev_wait_for_examine" 00:19:32.967 } 00:19:32.967 ] 00:19:32.967 }, 00:19:32.967 { 00:19:32.967 "subsystem": "nbd", 00:19:32.967 "config": [] 00:19:32.967 } 00:19:32.967 ] 00:19:32.967 }' 00:19:32.968 21:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 3089458 00:19:32.968 21:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3089458 ']' 00:19:32.968 21:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3089458 00:19:32.968 21:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 
00:19:32.968 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.968 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3089458 00:19:32.968 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:32.968 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:32.968 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3089458' 00:19:32.968 killing process with pid 3089458 00:19:32.968 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3089458 00:19:32.968 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.968 00:19:32.968 Latency(us) 00:19:32.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.968 =================================================================================================================== 00:19:32.968 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.968 [2024-07-24 21:44:41.032725] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:32.968 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3089458 00:19:33.229 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 3089167 00:19:33.229 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3089167 ']' 00:19:33.229 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3089167 00:19:33.229 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:33.229 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.229 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3089167 00:19:33.229 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:33.229 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:33.229 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3089167' 00:19:33.229 killing process with pid 3089167 00:19:33.229 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3089167 00:19:33.229 [2024-07-24 21:44:41.257073] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:33.229 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3089167 00:19:33.489 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:33.489 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:33.489 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:33.489 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:33.489 "subsystems": [ 00:19:33.489 { 00:19:33.489 "subsystem": "keyring", 00:19:33.489 "config": [] 00:19:33.489 }, 00:19:33.489 { 00:19:33.489 
"subsystem": "iobuf", 00:19:33.489 "config": [ 00:19:33.489 { 00:19:33.489 "method": "iobuf_set_options", 00:19:33.489 "params": { 00:19:33.489 "small_pool_count": 8192, 00:19:33.489 "large_pool_count": 1024, 00:19:33.490 "small_bufsize": 8192, 00:19:33.490 "large_bufsize": 135168 00:19:33.490 } 00:19:33.490 } 00:19:33.490 ] 00:19:33.490 }, 00:19:33.490 { 00:19:33.490 "subsystem": "sock", 00:19:33.490 "config": [ 00:19:33.490 { 00:19:33.490 "method": "sock_set_default_impl", 00:19:33.490 "params": { 00:19:33.490 "impl_name": "posix" 00:19:33.490 } 00:19:33.490 }, 00:19:33.490 { 00:19:33.490 "method": "sock_impl_set_options", 00:19:33.490 "params": { 00:19:33.490 "impl_name": "ssl", 00:19:33.490 "recv_buf_size": 4096, 00:19:33.490 "send_buf_size": 4096, 00:19:33.490 "enable_recv_pipe": true, 00:19:33.490 "enable_quickack": false, 00:19:33.490 "enable_placement_id": 0, 00:19:33.490 "enable_zerocopy_send_server": true, 00:19:33.490 "enable_zerocopy_send_client": false, 00:19:33.490 "zerocopy_threshold": 0, 00:19:33.490 "tls_version": 0, 00:19:33.490 "enable_ktls": false 00:19:33.490 } 00:19:33.490 }, 00:19:33.490 { 00:19:33.490 "method": "sock_impl_set_options", 00:19:33.490 "params": { 00:19:33.490 "impl_name": "posix", 00:19:33.490 "recv_buf_size": 2097152, 00:19:33.490 "send_buf_size": 2097152, 00:19:33.490 "enable_recv_pipe": true, 00:19:33.490 "enable_quickack": false, 00:19:33.490 "enable_placement_id": 0, 00:19:33.490 "enable_zerocopy_send_server": true, 00:19:33.490 "enable_zerocopy_send_client": false, 00:19:33.490 "zerocopy_threshold": 0, 00:19:33.490 "tls_version": 0, 00:19:33.490 "enable_ktls": false 00:19:33.490 } 00:19:33.490 } 00:19:33.490 ] 00:19:33.490 }, 00:19:33.490 { 00:19:33.490 "subsystem": "vmd", 00:19:33.490 "config": [] 00:19:33.490 }, 00:19:33.490 { 00:19:33.490 "subsystem": "accel", 00:19:33.490 "config": [ 00:19:33.490 { 00:19:33.490 "method": "accel_set_options", 00:19:33.490 "params": { 00:19:33.490 "small_cache_size": 128, 00:19:33.490 "large_cache_size": 16, 00:19:33.490 "task_count": 2048, 00:19:33.490 "sequence_count": 2048, 00:19:33.490 "buf_count": 2048 00:19:33.490 } 00:19:33.490 } 00:19:33.490 ] 00:19:33.490 }, 00:19:33.490 { 00:19:33.490 "subsystem": "bdev", 00:19:33.490 "config": [ 00:19:33.490 { 00:19:33.490 "method": "bdev_set_options", 00:19:33.490 "params": { 00:19:33.490 "bdev_io_pool_size": 65535, 00:19:33.490 "bdev_io_cache_size": 256, 00:19:33.490 "bdev_auto_examine": true, 00:19:33.490 "iobuf_small_cache_size": 128, 00:19:33.490 "iobuf_large_cache_size": 16 00:19:33.490 } 00:19:33.490 }, 00:19:33.490 { 00:19:33.490 "method": "bdev_raid_set_options", 00:19:33.490 "params": { 00:19:33.490 "process_window_size_kb": 1024, 00:19:33.490 "process_max_bandwidth_mb_sec": 0 00:19:33.490 } 00:19:33.490 }, 00:19:33.490 { 00:19:33.490 "method": "bdev_iscsi_set_options", 00:19:33.490 "params": { 00:19:33.490 "timeout_sec": 30 00:19:33.490 } 00:19:33.490 }, 00:19:33.490 { 00:19:33.490 "method": "bdev_nvme_set_options", 00:19:33.490 "params": { 00:19:33.490 "action_on_timeout": "none", 00:19:33.490 "timeout_us": 0, 00:19:33.490 "timeout_admin_us": 0, 00:19:33.490 "keep_alive_timeout_ms": 10000, 00:19:33.490 "arbitration_burst": 0, 00:19:33.490 "low_priority_weight": 0, 00:19:33.490 "medium_priority_weight": 0, 00:19:33.490 "high_priority_weight": 0, 00:19:33.490 "nvme_adminq_poll_period_us": 10000, 00:19:33.490 "nvme_ioq_poll_period_us": 0, 00:19:33.490 "io_queue_requests": 0, 00:19:33.490 "delay_cmd_submit": true, 00:19:33.490 "transport_retry_count": 4, 
00:19:33.490 "bdev_retry_count": 3, 00:19:33.490 "transport_ack_timeout": 0, 00:19:33.490 "ctrlr_loss_timeout_sec": 0, 00:19:33.490 "reconnect_delay_sec": 0, 00:19:33.490 "fast_io_fail_timeout_sec": 0, 00:19:33.490 "disable_auto_failback": false, 00:19:33.490 "generate_uuids": false, 00:19:33.490 "transport_tos": 0, 00:19:33.490 "nvme_error_stat": false, 00:19:33.490 "rdma_srq_size": 0, 00:19:33.490 "io_path_stat": false, 00:19:33.490 "allow_accel_sequence": false, 00:19:33.490 "rdma_max_cq_size": 0, 00:19:33.490 "rdma_cm_event_timeout_ms": 0, 00:19:33.490 "dhchap_digests": [ 00:19:33.490 "sha256", 00:19:33.490 "sha384", 00:19:33.490 "sha512" 00:19:33.490 ], 00:19:33.490 "dhchap_dhgroups": [ 00:19:33.490 "null", 00:19:33.490 "ffdhe2048", 00:19:33.490 "ffdhe3072", 00:19:33.490 "ffdhe4096", 00:19:33.490 "ffdhe6144", 00:19:33.490 "ffdhe8192" 00:19:33.490 ] 00:19:33.490 } 00:19:33.490 }, 00:19:33.490 { 00:19:33.491 "method": "bdev_nvme_set_hotplug", 00:19:33.491 "params": { 00:19:33.491 "period_us": 100000, 00:19:33.491 "enable": false 00:19:33.491 } 00:19:33.491 }, 00:19:33.491 { 00:19:33.491 "method": "bdev_malloc_create", 00:19:33.491 "params": { 00:19:33.491 "name": "malloc0", 00:19:33.491 "num_blocks": 8192, 00:19:33.491 "block_size": 4096, 00:19:33.491 "physical_block_size": 4096, 00:19:33.491 "uuid": "71c2beee-beee-4f9e-b60b-42eb05800bde", 00:19:33.491 "optimal_io_boundary": 0, 00:19:33.491 "md_size": 0, 00:19:33.491 "dif_type": 0, 00:19:33.491 "dif_is_head_of_md": false, 00:19:33.491 "dif_pi_format": 0 00:19:33.491 } 00:19:33.491 }, 00:19:33.491 { 00:19:33.491 "method": "bdev_wait_for_examine" 00:19:33.491 } 00:19:33.491 ] 00:19:33.491 }, 00:19:33.491 { 00:19:33.491 "subsystem": "nbd", 00:19:33.491 "config": [] 00:19:33.491 }, 00:19:33.491 { 00:19:33.491 "subsystem": "scheduler", 00:19:33.491 "config": [ 00:19:33.491 { 00:19:33.491 "method": "framework_set_scheduler", 00:19:33.491 "params": { 00:19:33.491 "name": "static" 00:19:33.491 } 00:19:33.491 } 00:19:33.491 ] 00:19:33.491 }, 00:19:33.491 { 00:19:33.491 "subsystem": "nvmf", 00:19:33.491 "config": [ 00:19:33.491 { 00:19:33.491 "method": "nvmf_set_config", 00:19:33.491 "params": { 00:19:33.491 "discovery_filter": "match_any", 00:19:33.491 "admin_cmd_passthru": { 00:19:33.491 "identify_ctrlr": false 00:19:33.491 } 00:19:33.491 } 00:19:33.491 }, 00:19:33.491 { 00:19:33.491 "method": "nvmf_set_max_subsystems", 00:19:33.491 "params": { 00:19:33.491 "max_subsystems": 1024 00:19:33.491 } 00:19:33.491 }, 00:19:33.491 { 00:19:33.491 "method": "nvmf_set_crdt", 00:19:33.491 "params": { 00:19:33.491 "crdt1": 0, 00:19:33.491 "crdt2": 0, 00:19:33.491 "crdt3": 0 00:19:33.491 } 00:19:33.491 }, 00:19:33.491 { 00:19:33.491 "method": "nvmf_create_transport", 00:19:33.491 "params": { 00:19:33.491 "trtype": "TCP", 00:19:33.491 "max_queue_depth": 128, 00:19:33.491 "max_io_qpairs_per_ctrlr": 127, 00:19:33.491 "in_capsule_data_size": 4096, 00:19:33.491 "max_io_size": 131072, 00:19:33.491 "io_unit_size": 131072, 00:19:33.491 "max_aq_depth": 128, 00:19:33.491 "num_shared_buffers": 511, 00:19:33.491 "buf_cache_size": 4294967295, 00:19:33.491 "dif_insert_or_strip": false, 00:19:33.491 "zcopy": false, 00:19:33.491 "c2h_success": false, 00:19:33.491 "sock_priority": 0, 00:19:33.491 "abort_timeout_sec": 1, 00:19:33.491 "ack_timeout": 0, 00:19:33.491 "data_wr_pool_size": 0 00:19:33.491 } 00:19:33.491 }, 00:19:33.491 { 00:19:33.491 "method": "nvmf_create_subsystem", 00:19:33.491 "params": { 00:19:33.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.491 
"allow_any_host": false, 00:19:33.491 "serial_number": "SPDK00000000000001", 00:19:33.491 "model_number": "SPDK bdev Controller", 00:19:33.491 "max_namespaces": 10, 00:19:33.491 "min_cntlid": 1, 00:19:33.491 "max_cntlid": 65519, 00:19:33.491 "ana_reporting": false 00:19:33.491 } 00:19:33.491 }, 00:19:33.491 { 00:19:33.491 "method": "nvmf_subsystem_add_host", 00:19:33.491 "params": { 00:19:33.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.491 "host": "nqn.2016-06.io.spdk:host1", 00:19:33.491 "psk": "/tmp/tmp.I7JMZbPFGl" 00:19:33.491 } 00:19:33.491 }, 00:19:33.491 { 00:19:33.491 "method": "nvmf_subsystem_add_ns", 00:19:33.491 "params": { 00:19:33.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.491 "namespace": { 00:19:33.491 "nsid": 1, 00:19:33.491 "bdev_name": "malloc0", 00:19:33.491 "nguid": "71C2BEEEBEEE4F9EB60B42EB05800BDE", 00:19:33.491 "uuid": "71c2beee-beee-4f9e-b60b-42eb05800bde", 00:19:33.491 "no_auto_visible": false 00:19:33.491 } 00:19:33.491 } 00:19:33.491 }, 00:19:33.491 { 00:19:33.491 "method": "nvmf_subsystem_add_listener", 00:19:33.491 "params": { 00:19:33.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.491 "listen_address": { 00:19:33.491 "trtype": "TCP", 00:19:33.491 "adrfam": "IPv4", 00:19:33.491 "traddr": "10.0.0.2", 00:19:33.491 "trsvcid": "4420" 00:19:33.491 }, 00:19:33.491 "secure_channel": true 00:19:33.491 } 00:19:33.491 } 00:19:33.491 ] 00:19:33.491 } 00:19:33.491 ] 00:19:33.491 }' 00:19:33.491 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.491 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3089759 00:19:33.491 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3089759 00:19:33.492 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:33.492 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3089759 ']' 00:19:33.492 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.492 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.492 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.492 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.492 21:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.492 [2024-07-24 21:44:41.503377] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:19:33.492 [2024-07-24 21:44:41.503421] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.492 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.492 [2024-07-24 21:44:41.559246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.752 [2024-07-24 21:44:41.639916] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:33.752 [2024-07-24 21:44:41.639949] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.752 [2024-07-24 21:44:41.639956] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.752 [2024-07-24 21:44:41.639962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.752 [2024-07-24 21:44:41.639967] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.752 [2024-07-24 21:44:41.640010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.752 [2024-07-24 21:44:41.842231] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.752 [2024-07-24 21:44:41.866966] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:34.012 [2024-07-24 21:44:41.883022] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.012 [2024-07-24 21:44:41.883220] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3089959 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3089959 /var/tmp/bdevperf.sock 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3089959 ']' 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:34.273 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:34.273 "subsystems": [ 00:19:34.273 { 00:19:34.273 "subsystem": "keyring", 00:19:34.273 "config": [] 00:19:34.273 }, 00:19:34.273 { 00:19:34.273 "subsystem": "iobuf", 00:19:34.273 "config": [ 00:19:34.273 { 00:19:34.273 "method": "iobuf_set_options", 00:19:34.273 "params": { 00:19:34.273 "small_pool_count": 8192, 00:19:34.273 "large_pool_count": 1024, 00:19:34.273 "small_bufsize": 8192, 00:19:34.273 "large_bufsize": 135168 00:19:34.273 } 00:19:34.273 } 00:19:34.273 ] 00:19:34.273 }, 00:19:34.274 { 00:19:34.274 "subsystem": "sock", 00:19:34.274 "config": [ 00:19:34.274 { 00:19:34.274 "method": "sock_set_default_impl", 00:19:34.274 "params": { 00:19:34.274 "impl_name": "posix" 00:19:34.274 } 00:19:34.274 }, 00:19:34.274 { 00:19:34.274 "method": "sock_impl_set_options", 00:19:34.274 "params": { 00:19:34.274 "impl_name": "ssl", 00:19:34.274 "recv_buf_size": 4096, 00:19:34.274 "send_buf_size": 4096, 00:19:34.274 "enable_recv_pipe": true, 00:19:34.274 "enable_quickack": false, 00:19:34.274 "enable_placement_id": 0, 00:19:34.274 "enable_zerocopy_send_server": true, 00:19:34.274 "enable_zerocopy_send_client": false, 00:19:34.274 "zerocopy_threshold": 0, 00:19:34.274 "tls_version": 0, 00:19:34.274 "enable_ktls": false 00:19:34.274 } 00:19:34.274 }, 00:19:34.274 { 00:19:34.274 "method": "sock_impl_set_options", 00:19:34.274 "params": { 00:19:34.274 "impl_name": "posix", 00:19:34.274 "recv_buf_size": 2097152, 00:19:34.274 "send_buf_size": 2097152, 00:19:34.274 "enable_recv_pipe": true, 00:19:34.274 "enable_quickack": false, 00:19:34.274 "enable_placement_id": 0, 00:19:34.274 "enable_zerocopy_send_server": true, 00:19:34.274 "enable_zerocopy_send_client": false, 00:19:34.274 "zerocopy_threshold": 0, 00:19:34.274 "tls_version": 0, 00:19:34.274 "enable_ktls": false 00:19:34.274 } 00:19:34.274 } 00:19:34.274 ] 00:19:34.274 }, 00:19:34.274 { 00:19:34.274 "subsystem": "vmd", 00:19:34.274 "config": [] 00:19:34.274 }, 00:19:34.274 { 00:19:34.274 "subsystem": "accel", 00:19:34.274 "config": [ 00:19:34.274 { 00:19:34.274 "method": "accel_set_options", 00:19:34.274 "params": { 00:19:34.274 "small_cache_size": 128, 00:19:34.274 "large_cache_size": 16, 00:19:34.274 "task_count": 2048, 00:19:34.274 "sequence_count": 2048, 00:19:34.274 "buf_count": 2048 00:19:34.274 } 00:19:34.274 } 00:19:34.274 ] 00:19:34.274 }, 00:19:34.274 { 00:19:34.274 "subsystem": "bdev", 00:19:34.274 "config": [ 00:19:34.274 { 00:19:34.274 "method": "bdev_set_options", 00:19:34.274 "params": { 00:19:34.274 "bdev_io_pool_size": 65535, 00:19:34.274 "bdev_io_cache_size": 256, 00:19:34.274 "bdev_auto_examine": true, 00:19:34.274 "iobuf_small_cache_size": 128, 00:19:34.274 "iobuf_large_cache_size": 16 00:19:34.274 } 00:19:34.274 }, 00:19:34.274 { 00:19:34.274 "method": "bdev_raid_set_options", 00:19:34.274 "params": { 00:19:34.274 "process_window_size_kb": 1024, 00:19:34.274 "process_max_bandwidth_mb_sec": 0 00:19:34.274 } 00:19:34.274 }, 00:19:34.274 { 00:19:34.274 "method": "bdev_iscsi_set_options", 00:19:34.274 "params": { 00:19:34.274 "timeout_sec": 30 00:19:34.274 } 00:19:34.274 }, 00:19:34.274 { 00:19:34.274 "method": "bdev_nvme_set_options", 00:19:34.274 "params": { 00:19:34.274 "action_on_timeout": "none", 00:19:34.274 "timeout_us": 0, 00:19:34.274 "timeout_admin_us": 0, 00:19:34.274 "keep_alive_timeout_ms": 10000, 00:19:34.274 "arbitration_burst": 0, 00:19:34.274 "low_priority_weight": 0, 00:19:34.274 "medium_priority_weight": 0, 
00:19:34.274 "high_priority_weight": 0, 00:19:34.274 "nvme_adminq_poll_period_us": 10000, 00:19:34.274 "nvme_ioq_poll_period_us": 0, 00:19:34.274 "io_queue_requests": 512, 00:19:34.274 "delay_cmd_submit": true, 00:19:34.274 "transport_retry_count": 4, 00:19:34.274 "bdev_retry_count": 3, 00:19:34.274 "transport_ack_timeout": 0, 00:19:34.274 "ctrlr_loss_timeout_sec": 0, 00:19:34.274 "reconnect_delay_sec": 0, 00:19:34.274 "fast_io_fail_timeout_sec": 0, 00:19:34.274 "disable_auto_failback": false, 00:19:34.274 "generate_uuids": false, 00:19:34.274 "transport_tos": 0, 00:19:34.274 "nvme_error_stat": false, 00:19:34.274 "rdma_srq_size": 0, 00:19:34.274 "io_path_stat": false, 00:19:34.274 "allow_accel_sequence": false, 00:19:34.274 "rdma_max_cq_size": 0, 00:19:34.274 "rdma_cm_event_timeout_ms": 0, 00:19:34.274 "dhchap_digests": [ 00:19:34.274 "sha256", 00:19:34.274 "sha384", 00:19:34.274 "sha512" 00:19:34.274 ], 00:19:34.274 "dhchap_dhgroups": [ 00:19:34.274 "null", 00:19:34.274 "ffdhe2048", 00:19:34.274 "ffdhe3072", 00:19:34.274 "ffdhe4096", 00:19:34.274 "ffdhe6144", 00:19:34.274 "ffdhe8192" 00:19:34.274 ] 00:19:34.274 } 00:19:34.274 }, 00:19:34.274 { 00:19:34.274 "method": "bdev_nvme_attach_controller", 00:19:34.274 "params": { 00:19:34.274 "name": "TLSTEST", 00:19:34.274 "trtype": "TCP", 00:19:34.274 "adrfam": "IPv4", 00:19:34.274 "traddr": "10.0.0.2", 00:19:34.274 "trsvcid": "4420", 00:19:34.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.274 "prchk_reftag": false, 00:19:34.274 "prchk_guard": false, 00:19:34.274 "ctrlr_loss_timeout_sec": 0, 00:19:34.274 "reconnect_delay_sec": 0, 00:19:34.274 "fast_io_fail_timeout_sec": 0, 00:19:34.274 "psk": "/tmp/tmp.I7JMZbPFGl", 00:19:34.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.274 "hdgst": false, 00:19:34.274 "ddgst": false 00:19:34.274 } 00:19:34.274 }, 00:19:34.274 { 00:19:34.274 "method": "bdev_nvme_set_hotplug", 00:19:34.274 "params": { 00:19:34.274 "period_us": 100000, 00:19:34.274 "enable": false 00:19:34.274 } 00:19:34.274 }, 00:19:34.274 { 00:19:34.274 "method": "bdev_wait_for_examine" 00:19:34.274 } 00:19:34.274 ] 00:19:34.274 }, 00:19:34.274 { 00:19:34.274 "subsystem": "nbd", 00:19:34.274 "config": [] 00:19:34.274 } 00:19:34.274 ] 00:19:34.274 }' 00:19:34.274 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.274 21:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.274 [2024-07-24 21:44:42.384995] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:19:34.274 [2024-07-24 21:44:42.385048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089959 ] 00:19:34.535 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.535 [2024-07-24 21:44:42.435122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.535 [2024-07-24 21:44:42.508396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.794 [2024-07-24 21:44:42.651190] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.794 [2024-07-24 21:44:42.651272] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:35.364 21:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.364 21:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:35.364 21:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:35.364 Running I/O for 10 seconds... 00:19:45.345 00:19:45.345 Latency(us) 00:19:45.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.345 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:45.345 Verification LBA range: start 0x0 length 0x2000 00:19:45.345 TLSTESTn1 : 10.07 1319.64 5.15 0.00 0.00 96703.60 7123.48 141329.81 00:19:45.345 =================================================================================================================== 00:19:45.345 Total : 1319.64 5.15 0.00 0.00 96703.60 7123.48 141329.81 00:19:45.345 0 00:19:45.345 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:45.345 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 3089959 00:19:45.345 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3089959 ']' 00:19:45.345 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3089959 00:19:45.345 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:45.345 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.345 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3089959 00:19:45.345 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:45.345 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:45.345 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3089959' 00:19:45.345 killing process with pid 3089959 00:19:45.345 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3089959 00:19:45.345 Received shutdown signal, test time was about 10.000000 seconds 00:19:45.345 00:19:45.345 Latency(us) 00:19:45.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.345 
=================================================================================================================== 00:19:45.345 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.345 [2024-07-24 21:44:53.420803] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:45.345 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3089959 00:19:45.606 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 3089759 00:19:45.606 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3089759 ']' 00:19:45.606 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3089759 00:19:45.606 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:45.606 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.606 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3089759 00:19:45.606 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:45.606 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:45.606 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3089759' 00:19:45.606 killing process with pid 3089759 00:19:45.606 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3089759 00:19:45.606 [2024-07-24 21:44:53.650877] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:45.606 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3089759 00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3091807 00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3091807 00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3091807 ']' 00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.866 21:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.866 [2024-07-24 21:44:53.900692] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:19:45.866 [2024-07-24 21:44:53.900737] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.866 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.866 [2024-07-24 21:44:53.956882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.126 [2024-07-24 21:44:54.027568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.126 [2024-07-24 21:44:54.027608] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.126 [2024-07-24 21:44:54.027614] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.126 [2024-07-24 21:44:54.027620] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.126 [2024-07-24 21:44:54.027625] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.126 [2024-07-24 21:44:54.027644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.696 21:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.696 21:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:46.696 21:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.696 21:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.696 21:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.696 21:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.696 21:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.I7JMZbPFGl 00:19:46.696 21:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.I7JMZbPFGl 00:19:46.696 21:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:46.956 [2024-07-24 21:44:54.883719] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.956 21:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:47.216 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:47.216 [2024-07-24 21:44:55.232627] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.216 [2024-07-24 21:44:55.232846] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.216 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:47.477 malloc0 00:19:47.477 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:47.737 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I7JMZbPFGl 00:19:47.737 [2024-07-24 21:44:55.758227] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:47.737 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3092224 00:19:47.737 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:47.737 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:47.737 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3092224 /var/tmp/bdevperf.sock 00:19:47.737 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3092224 ']' 00:19:47.737 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.737 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.737 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.737 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.737 21:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.737 [2024-07-24 21:44:55.816630] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
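The setup_nvmf_tgt helper traced above boils down to the following rpc.py sequence against the freshly started target. Every command is copied from the xtrace output; only the RPC shorthand variable is added here for brevity, and /tmp/tmp.I7JMZbPFGl is the PSK file generated earlier in this run:

# condensed from the xtrace above: TCP transport, subsystem with one malloc namespace,
# a TLS-capable listener (-k), and a host entry keyed by the PSK file
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I7JMZbPFGl

The nvmf_tcp_psk_path deprecation warning printed above is the reason the later part of this test switches to keyring_file_add_key and a named key instead of a raw PSK path.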
00:19:47.737 [2024-07-24 21:44:55.816674] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092224 ] 00:19:47.737 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.997 [2024-07-24 21:44:55.870330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.997 [2024-07-24 21:44:55.949344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.566 21:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.566 21:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:48.566 21:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.I7JMZbPFGl 00:19:48.826 21:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:48.826 [2024-07-24 21:44:56.929068] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.085 nvme0n1 00:19:49.085 21:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:49.085 Running I/O for 1 seconds... 00:19:50.466 00:19:50.466 Latency(us) 00:19:50.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.466 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:50.466 Verification LBA range: start 0x0 length 0x2000 00:19:50.466 nvme0n1 : 1.06 1037.74 4.05 0.00 0.00 120461.15 7237.45 157742.30 00:19:50.466 =================================================================================================================== 00:19:50.466 Total : 1037.74 4.05 0.00 0.00 120461.15 7237.45 157742.30 00:19:50.466 0 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 3092224 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3092224 ']' 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3092224 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3092224 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3092224' 00:19:50.466 killing process with pid 3092224 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3092224 00:19:50.466 Received shutdown signal, 
test time was about 1.000000 seconds 00:19:50.466 00:19:50.466 Latency(us) 00:19:50.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.466 =================================================================================================================== 00:19:50.466 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3092224 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 3091807 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3091807 ']' 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3091807 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3091807 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3091807' 00:19:50.466 killing process with pid 3091807 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3091807 00:19:50.466 [2024-07-24 21:44:58.495197] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:50.466 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3091807 00:19:50.726 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:50.726 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:50.726 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:50.726 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.726 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3092750 00:19:50.726 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3092750 00:19:50.726 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:50.727 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3092750 ']' 00:19:50.727 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.727 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.727 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:50.727 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.727 21:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.727 [2024-07-24 21:44:58.738430] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:19:50.727 [2024-07-24 21:44:58.738475] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.727 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.727 [2024-07-24 21:44:58.794117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.003 [2024-07-24 21:44:58.873517] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.003 [2024-07-24 21:44:58.873561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.003 [2024-07-24 21:44:58.873567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.003 [2024-07-24 21:44:58.873573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.003 [2024-07-24 21:44:58.873578] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:51.003 [2024-07-24 21:44:58.873611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.573 [2024-07-24 21:44:59.588719] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.573 malloc0 00:19:51.573 [2024-07-24 21:44:59.616863] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:51.573 [2024-07-24 21:44:59.624371] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3092782 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3092782 /var/tmp/bdevperf.sock 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:51.573 21:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3092782 ']' 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.573 21:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.861 [2024-07-24 21:44:59.695019] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:19:51.861 [2024-07-24 21:44:59.695064] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092782 ] 00:19:51.861 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.861 [2024-07-24 21:44:59.749179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.861 [2024-07-24 21:44:59.827766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.430 21:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.430 21:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:52.430 21:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.I7JMZbPFGl 00:19:52.690 21:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:52.963 [2024-07-24 21:45:00.819324] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.963 nvme0n1 00:19:52.963 21:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:52.963 Running I/O for 1 seconds... 
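On the host side the same PSK file is consumed through the keyring: bdevperf is started with -z so it idles until driven over its RPC socket, the key is registered as key0, the controller is attached with --psk key0, and bdevperf.py then runs the verify workload. Condensed from the commands traced above, with only the RPC shorthand variable added:

# condensed from the xtrace above; bdevperf serves RPC on /var/tmp/bdevperf.sock (-r)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.I7JMZbPFGl
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
  -s /var/tmp/bdevperf.sock perform_tests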
00:19:54.349 00:19:54.349 Latency(us) 00:19:54.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.349 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:54.349 Verification LBA range: start 0x0 length 0x2000 00:19:54.349 nvme0n1 : 1.08 1209.40 4.72 0.00 0.00 102814.04 7151.97 139506.20 00:19:54.349 =================================================================================================================== 00:19:54.349 Total : 1209.40 4.72 0.00 0.00 102814.04 7151.97 139506.20 00:19:54.349 0 00:19:54.349 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:54.349 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.349 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.349 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.349 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:54.349 "subsystems": [ 00:19:54.349 { 00:19:54.349 "subsystem": "keyring", 00:19:54.349 "config": [ 00:19:54.349 { 00:19:54.349 "method": "keyring_file_add_key", 00:19:54.349 "params": { 00:19:54.349 "name": "key0", 00:19:54.349 "path": "/tmp/tmp.I7JMZbPFGl" 00:19:54.349 } 00:19:54.349 } 00:19:54.349 ] 00:19:54.349 }, 00:19:54.349 { 00:19:54.349 "subsystem": "iobuf", 00:19:54.349 "config": [ 00:19:54.349 { 00:19:54.349 "method": "iobuf_set_options", 00:19:54.349 "params": { 00:19:54.349 "small_pool_count": 8192, 00:19:54.349 "large_pool_count": 1024, 00:19:54.349 "small_bufsize": 8192, 00:19:54.349 "large_bufsize": 135168 00:19:54.349 } 00:19:54.349 } 00:19:54.349 ] 00:19:54.349 }, 00:19:54.349 { 00:19:54.349 "subsystem": "sock", 00:19:54.349 "config": [ 00:19:54.349 { 00:19:54.349 "method": "sock_set_default_impl", 00:19:54.349 "params": { 00:19:54.349 "impl_name": "posix" 00:19:54.349 } 00:19:54.349 }, 00:19:54.349 { 00:19:54.349 "method": "sock_impl_set_options", 00:19:54.349 "params": { 00:19:54.349 "impl_name": "ssl", 00:19:54.349 "recv_buf_size": 4096, 00:19:54.349 "send_buf_size": 4096, 00:19:54.349 "enable_recv_pipe": true, 00:19:54.349 "enable_quickack": false, 00:19:54.349 "enable_placement_id": 0, 00:19:54.349 "enable_zerocopy_send_server": true, 00:19:54.349 "enable_zerocopy_send_client": false, 00:19:54.349 "zerocopy_threshold": 0, 00:19:54.349 "tls_version": 0, 00:19:54.349 "enable_ktls": false 00:19:54.349 } 00:19:54.349 }, 00:19:54.349 { 00:19:54.349 "method": "sock_impl_set_options", 00:19:54.349 "params": { 00:19:54.349 "impl_name": "posix", 00:19:54.349 "recv_buf_size": 2097152, 00:19:54.349 "send_buf_size": 2097152, 00:19:54.349 "enable_recv_pipe": true, 00:19:54.349 "enable_quickack": false, 00:19:54.349 "enable_placement_id": 0, 00:19:54.349 "enable_zerocopy_send_server": true, 00:19:54.349 "enable_zerocopy_send_client": false, 00:19:54.349 "zerocopy_threshold": 0, 00:19:54.349 "tls_version": 0, 00:19:54.349 "enable_ktls": false 00:19:54.349 } 00:19:54.349 } 00:19:54.349 ] 00:19:54.349 }, 00:19:54.349 { 00:19:54.349 "subsystem": "vmd", 00:19:54.349 "config": [] 00:19:54.349 }, 00:19:54.349 { 00:19:54.349 "subsystem": "accel", 00:19:54.349 "config": [ 00:19:54.349 { 00:19:54.349 "method": "accel_set_options", 00:19:54.349 "params": { 00:19:54.349 "small_cache_size": 128, 00:19:54.349 "large_cache_size": 16, 00:19:54.349 "task_count": 2048, 00:19:54.349 "sequence_count": 2048, 00:19:54.349 
"buf_count": 2048 00:19:54.349 } 00:19:54.349 } 00:19:54.349 ] 00:19:54.349 }, 00:19:54.349 { 00:19:54.349 "subsystem": "bdev", 00:19:54.349 "config": [ 00:19:54.349 { 00:19:54.349 "method": "bdev_set_options", 00:19:54.349 "params": { 00:19:54.349 "bdev_io_pool_size": 65535, 00:19:54.349 "bdev_io_cache_size": 256, 00:19:54.349 "bdev_auto_examine": true, 00:19:54.349 "iobuf_small_cache_size": 128, 00:19:54.349 "iobuf_large_cache_size": 16 00:19:54.349 } 00:19:54.349 }, 00:19:54.349 { 00:19:54.349 "method": "bdev_raid_set_options", 00:19:54.349 "params": { 00:19:54.349 "process_window_size_kb": 1024, 00:19:54.349 "process_max_bandwidth_mb_sec": 0 00:19:54.349 } 00:19:54.349 }, 00:19:54.349 { 00:19:54.349 "method": "bdev_iscsi_set_options", 00:19:54.349 "params": { 00:19:54.349 "timeout_sec": 30 00:19:54.349 } 00:19:54.349 }, 00:19:54.349 { 00:19:54.349 "method": "bdev_nvme_set_options", 00:19:54.349 "params": { 00:19:54.349 "action_on_timeout": "none", 00:19:54.349 "timeout_us": 0, 00:19:54.349 "timeout_admin_us": 0, 00:19:54.349 "keep_alive_timeout_ms": 10000, 00:19:54.349 "arbitration_burst": 0, 00:19:54.350 "low_priority_weight": 0, 00:19:54.350 "medium_priority_weight": 0, 00:19:54.350 "high_priority_weight": 0, 00:19:54.350 "nvme_adminq_poll_period_us": 10000, 00:19:54.350 "nvme_ioq_poll_period_us": 0, 00:19:54.350 "io_queue_requests": 0, 00:19:54.350 "delay_cmd_submit": true, 00:19:54.350 "transport_retry_count": 4, 00:19:54.350 "bdev_retry_count": 3, 00:19:54.350 "transport_ack_timeout": 0, 00:19:54.350 "ctrlr_loss_timeout_sec": 0, 00:19:54.350 "reconnect_delay_sec": 0, 00:19:54.350 "fast_io_fail_timeout_sec": 0, 00:19:54.350 "disable_auto_failback": false, 00:19:54.350 "generate_uuids": false, 00:19:54.350 "transport_tos": 0, 00:19:54.350 "nvme_error_stat": false, 00:19:54.350 "rdma_srq_size": 0, 00:19:54.350 "io_path_stat": false, 00:19:54.350 "allow_accel_sequence": false, 00:19:54.350 "rdma_max_cq_size": 0, 00:19:54.350 "rdma_cm_event_timeout_ms": 0, 00:19:54.350 "dhchap_digests": [ 00:19:54.350 "sha256", 00:19:54.350 "sha384", 00:19:54.350 "sha512" 00:19:54.350 ], 00:19:54.350 "dhchap_dhgroups": [ 00:19:54.350 "null", 00:19:54.350 "ffdhe2048", 00:19:54.350 "ffdhe3072", 00:19:54.350 "ffdhe4096", 00:19:54.350 "ffdhe6144", 00:19:54.350 "ffdhe8192" 00:19:54.350 ] 00:19:54.350 } 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "method": "bdev_nvme_set_hotplug", 00:19:54.350 "params": { 00:19:54.350 "period_us": 100000, 00:19:54.350 "enable": false 00:19:54.350 } 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "method": "bdev_malloc_create", 00:19:54.350 "params": { 00:19:54.350 "name": "malloc0", 00:19:54.350 "num_blocks": 8192, 00:19:54.350 "block_size": 4096, 00:19:54.350 "physical_block_size": 4096, 00:19:54.350 "uuid": "8f6b4cd4-d0b7-4b53-8c0c-6f1a3859dc17", 00:19:54.350 "optimal_io_boundary": 0, 00:19:54.350 "md_size": 0, 00:19:54.350 "dif_type": 0, 00:19:54.350 "dif_is_head_of_md": false, 00:19:54.350 "dif_pi_format": 0 00:19:54.350 } 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "method": "bdev_wait_for_examine" 00:19:54.350 } 00:19:54.350 ] 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "subsystem": "nbd", 00:19:54.350 "config": [] 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "subsystem": "scheduler", 00:19:54.350 "config": [ 00:19:54.350 { 00:19:54.350 "method": "framework_set_scheduler", 00:19:54.350 "params": { 00:19:54.350 "name": "static" 00:19:54.350 } 00:19:54.350 } 00:19:54.350 ] 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "subsystem": "nvmf", 00:19:54.350 "config": [ 00:19:54.350 { 
00:19:54.350 "method": "nvmf_set_config", 00:19:54.350 "params": { 00:19:54.350 "discovery_filter": "match_any", 00:19:54.350 "admin_cmd_passthru": { 00:19:54.350 "identify_ctrlr": false 00:19:54.350 } 00:19:54.350 } 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "method": "nvmf_set_max_subsystems", 00:19:54.350 "params": { 00:19:54.350 "max_subsystems": 1024 00:19:54.350 } 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "method": "nvmf_set_crdt", 00:19:54.350 "params": { 00:19:54.350 "crdt1": 0, 00:19:54.350 "crdt2": 0, 00:19:54.350 "crdt3": 0 00:19:54.350 } 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "method": "nvmf_create_transport", 00:19:54.350 "params": { 00:19:54.350 "trtype": "TCP", 00:19:54.350 "max_queue_depth": 128, 00:19:54.350 "max_io_qpairs_per_ctrlr": 127, 00:19:54.350 "in_capsule_data_size": 4096, 00:19:54.350 "max_io_size": 131072, 00:19:54.350 "io_unit_size": 131072, 00:19:54.350 "max_aq_depth": 128, 00:19:54.350 "num_shared_buffers": 511, 00:19:54.350 "buf_cache_size": 4294967295, 00:19:54.350 "dif_insert_or_strip": false, 00:19:54.350 "zcopy": false, 00:19:54.350 "c2h_success": false, 00:19:54.350 "sock_priority": 0, 00:19:54.350 "abort_timeout_sec": 1, 00:19:54.350 "ack_timeout": 0, 00:19:54.350 "data_wr_pool_size": 0 00:19:54.350 } 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "method": "nvmf_create_subsystem", 00:19:54.350 "params": { 00:19:54.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.350 "allow_any_host": false, 00:19:54.350 "serial_number": "00000000000000000000", 00:19:54.350 "model_number": "SPDK bdev Controller", 00:19:54.350 "max_namespaces": 32, 00:19:54.350 "min_cntlid": 1, 00:19:54.350 "max_cntlid": 65519, 00:19:54.350 "ana_reporting": false 00:19:54.350 } 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "method": "nvmf_subsystem_add_host", 00:19:54.350 "params": { 00:19:54.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.350 "host": "nqn.2016-06.io.spdk:host1", 00:19:54.350 "psk": "key0" 00:19:54.350 } 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "method": "nvmf_subsystem_add_ns", 00:19:54.350 "params": { 00:19:54.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.350 "namespace": { 00:19:54.350 "nsid": 1, 00:19:54.350 "bdev_name": "malloc0", 00:19:54.350 "nguid": "8F6B4CD4D0B74B538C0C6F1A3859DC17", 00:19:54.350 "uuid": "8f6b4cd4-d0b7-4b53-8c0c-6f1a3859dc17", 00:19:54.350 "no_auto_visible": false 00:19:54.350 } 00:19:54.350 } 00:19:54.350 }, 00:19:54.350 { 00:19:54.350 "method": "nvmf_subsystem_add_listener", 00:19:54.350 "params": { 00:19:54.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.350 "listen_address": { 00:19:54.350 "trtype": "TCP", 00:19:54.350 "adrfam": "IPv4", 00:19:54.350 "traddr": "10.0.0.2", 00:19:54.350 "trsvcid": "4420" 00:19:54.350 }, 00:19:54.350 "secure_channel": false, 00:19:54.350 "sock_impl": "ssl" 00:19:54.350 } 00:19:54.350 } 00:19:54.350 ] 00:19:54.350 } 00:19:54.350 ] 00:19:54.350 }' 00:19:54.350 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:54.611 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:54.611 "subsystems": [ 00:19:54.611 { 00:19:54.611 "subsystem": "keyring", 00:19:54.611 "config": [ 00:19:54.611 { 00:19:54.611 "method": "keyring_file_add_key", 00:19:54.611 "params": { 00:19:54.611 "name": "key0", 00:19:54.611 "path": "/tmp/tmp.I7JMZbPFGl" 00:19:54.611 } 00:19:54.611 } 00:19:54.611 ] 00:19:54.611 }, 00:19:54.611 { 00:19:54.611 "subsystem": "iobuf", 
00:19:54.611 "config": [ 00:19:54.611 { 00:19:54.611 "method": "iobuf_set_options", 00:19:54.611 "params": { 00:19:54.611 "small_pool_count": 8192, 00:19:54.611 "large_pool_count": 1024, 00:19:54.611 "small_bufsize": 8192, 00:19:54.611 "large_bufsize": 135168 00:19:54.611 } 00:19:54.611 } 00:19:54.611 ] 00:19:54.611 }, 00:19:54.611 { 00:19:54.611 "subsystem": "sock", 00:19:54.611 "config": [ 00:19:54.611 { 00:19:54.611 "method": "sock_set_default_impl", 00:19:54.611 "params": { 00:19:54.611 "impl_name": "posix" 00:19:54.611 } 00:19:54.611 }, 00:19:54.611 { 00:19:54.611 "method": "sock_impl_set_options", 00:19:54.611 "params": { 00:19:54.611 "impl_name": "ssl", 00:19:54.611 "recv_buf_size": 4096, 00:19:54.611 "send_buf_size": 4096, 00:19:54.611 "enable_recv_pipe": true, 00:19:54.611 "enable_quickack": false, 00:19:54.611 "enable_placement_id": 0, 00:19:54.611 "enable_zerocopy_send_server": true, 00:19:54.611 "enable_zerocopy_send_client": false, 00:19:54.611 "zerocopy_threshold": 0, 00:19:54.611 "tls_version": 0, 00:19:54.611 "enable_ktls": false 00:19:54.611 } 00:19:54.611 }, 00:19:54.611 { 00:19:54.611 "method": "sock_impl_set_options", 00:19:54.611 "params": { 00:19:54.611 "impl_name": "posix", 00:19:54.611 "recv_buf_size": 2097152, 00:19:54.611 "send_buf_size": 2097152, 00:19:54.611 "enable_recv_pipe": true, 00:19:54.611 "enable_quickack": false, 00:19:54.611 "enable_placement_id": 0, 00:19:54.611 "enable_zerocopy_send_server": true, 00:19:54.611 "enable_zerocopy_send_client": false, 00:19:54.611 "zerocopy_threshold": 0, 00:19:54.611 "tls_version": 0, 00:19:54.611 "enable_ktls": false 00:19:54.611 } 00:19:54.611 } 00:19:54.611 ] 00:19:54.611 }, 00:19:54.611 { 00:19:54.611 "subsystem": "vmd", 00:19:54.611 "config": [] 00:19:54.611 }, 00:19:54.611 { 00:19:54.611 "subsystem": "accel", 00:19:54.611 "config": [ 00:19:54.611 { 00:19:54.611 "method": "accel_set_options", 00:19:54.611 "params": { 00:19:54.611 "small_cache_size": 128, 00:19:54.611 "large_cache_size": 16, 00:19:54.611 "task_count": 2048, 00:19:54.611 "sequence_count": 2048, 00:19:54.611 "buf_count": 2048 00:19:54.611 } 00:19:54.611 } 00:19:54.611 ] 00:19:54.611 }, 00:19:54.611 { 00:19:54.611 "subsystem": "bdev", 00:19:54.611 "config": [ 00:19:54.611 { 00:19:54.611 "method": "bdev_set_options", 00:19:54.611 "params": { 00:19:54.611 "bdev_io_pool_size": 65535, 00:19:54.611 "bdev_io_cache_size": 256, 00:19:54.611 "bdev_auto_examine": true, 00:19:54.611 "iobuf_small_cache_size": 128, 00:19:54.611 "iobuf_large_cache_size": 16 00:19:54.611 } 00:19:54.611 }, 00:19:54.611 { 00:19:54.611 "method": "bdev_raid_set_options", 00:19:54.611 "params": { 00:19:54.611 "process_window_size_kb": 1024, 00:19:54.611 "process_max_bandwidth_mb_sec": 0 00:19:54.611 } 00:19:54.611 }, 00:19:54.611 { 00:19:54.611 "method": "bdev_iscsi_set_options", 00:19:54.611 "params": { 00:19:54.611 "timeout_sec": 30 00:19:54.611 } 00:19:54.611 }, 00:19:54.611 { 00:19:54.611 "method": "bdev_nvme_set_options", 00:19:54.611 "params": { 00:19:54.611 "action_on_timeout": "none", 00:19:54.611 "timeout_us": 0, 00:19:54.611 "timeout_admin_us": 0, 00:19:54.611 "keep_alive_timeout_ms": 10000, 00:19:54.611 "arbitration_burst": 0, 00:19:54.611 "low_priority_weight": 0, 00:19:54.611 "medium_priority_weight": 0, 00:19:54.611 "high_priority_weight": 0, 00:19:54.611 "nvme_adminq_poll_period_us": 10000, 00:19:54.611 "nvme_ioq_poll_period_us": 0, 00:19:54.611 "io_queue_requests": 512, 00:19:54.611 "delay_cmd_submit": true, 00:19:54.611 "transport_retry_count": 4, 00:19:54.611 
"bdev_retry_count": 3, 00:19:54.611 "transport_ack_timeout": 0, 00:19:54.611 "ctrlr_loss_timeout_sec": 0, 00:19:54.611 "reconnect_delay_sec": 0, 00:19:54.611 "fast_io_fail_timeout_sec": 0, 00:19:54.611 "disable_auto_failback": false, 00:19:54.611 "generate_uuids": false, 00:19:54.611 "transport_tos": 0, 00:19:54.611 "nvme_error_stat": false, 00:19:54.611 "rdma_srq_size": 0, 00:19:54.611 "io_path_stat": false, 00:19:54.611 "allow_accel_sequence": false, 00:19:54.611 "rdma_max_cq_size": 0, 00:19:54.611 "rdma_cm_event_timeout_ms": 0, 00:19:54.611 "dhchap_digests": [ 00:19:54.611 "sha256", 00:19:54.611 "sha384", 00:19:54.611 "sha512" 00:19:54.611 ], 00:19:54.611 "dhchap_dhgroups": [ 00:19:54.611 "null", 00:19:54.611 "ffdhe2048", 00:19:54.611 "ffdhe3072", 00:19:54.611 "ffdhe4096", 00:19:54.611 "ffdhe6144", 00:19:54.611 "ffdhe8192" 00:19:54.611 ] 00:19:54.611 } 00:19:54.611 }, 00:19:54.611 { 00:19:54.611 "method": "bdev_nvme_attach_controller", 00:19:54.611 "params": { 00:19:54.611 "name": "nvme0", 00:19:54.611 "trtype": "TCP", 00:19:54.611 "adrfam": "IPv4", 00:19:54.611 "traddr": "10.0.0.2", 00:19:54.611 "trsvcid": "4420", 00:19:54.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.611 "prchk_reftag": false, 00:19:54.611 "prchk_guard": false, 00:19:54.612 "ctrlr_loss_timeout_sec": 0, 00:19:54.612 "reconnect_delay_sec": 0, 00:19:54.612 "fast_io_fail_timeout_sec": 0, 00:19:54.612 "psk": "key0", 00:19:54.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.612 "hdgst": false, 00:19:54.612 "ddgst": false 00:19:54.612 } 00:19:54.612 }, 00:19:54.612 { 00:19:54.612 "method": "bdev_nvme_set_hotplug", 00:19:54.612 "params": { 00:19:54.612 "period_us": 100000, 00:19:54.612 "enable": false 00:19:54.612 } 00:19:54.612 }, 00:19:54.612 { 00:19:54.612 "method": "bdev_enable_histogram", 00:19:54.612 "params": { 00:19:54.612 "name": "nvme0n1", 00:19:54.612 "enable": true 00:19:54.612 } 00:19:54.612 }, 00:19:54.612 { 00:19:54.612 "method": "bdev_wait_for_examine" 00:19:54.612 } 00:19:54.612 ] 00:19:54.612 }, 00:19:54.612 { 00:19:54.612 "subsystem": "nbd", 00:19:54.612 "config": [] 00:19:54.612 } 00:19:54.612 ] 00:19:54.612 }' 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 3092782 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3092782 ']' 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3092782 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3092782 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3092782' 00:19:54.612 killing process with pid 3092782 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3092782 00:19:54.612 Received shutdown signal, test time was about 1.000000 seconds 00:19:54.612 00:19:54.612 Latency(us) 00:19:54.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:54.612 =================================================================================================================== 00:19:54.612 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3092782 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 3092750 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3092750 ']' 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3092750 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.612 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3092750 00:19:54.872 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:54.872 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:54.872 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3092750' 00:19:54.872 killing process with pid 3092750 00:19:54.872 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3092750 00:19:54.872 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3092750 00:19:54.872 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:54.872 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:54.872 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:54.872 "subsystems": [ 00:19:54.872 { 00:19:54.872 "subsystem": "keyring", 00:19:54.872 "config": [ 00:19:54.872 { 00:19:54.872 "method": "keyring_file_add_key", 00:19:54.872 "params": { 00:19:54.872 "name": "key0", 00:19:54.872 "path": "/tmp/tmp.I7JMZbPFGl" 00:19:54.872 } 00:19:54.872 } 00:19:54.872 ] 00:19:54.872 }, 00:19:54.872 { 00:19:54.872 "subsystem": "iobuf", 00:19:54.872 "config": [ 00:19:54.872 { 00:19:54.872 "method": "iobuf_set_options", 00:19:54.872 "params": { 00:19:54.872 "small_pool_count": 8192, 00:19:54.872 "large_pool_count": 1024, 00:19:54.872 "small_bufsize": 8192, 00:19:54.872 "large_bufsize": 135168 00:19:54.872 } 00:19:54.872 } 00:19:54.872 ] 00:19:54.872 }, 00:19:54.872 { 00:19:54.872 "subsystem": "sock", 00:19:54.872 "config": [ 00:19:54.872 { 00:19:54.872 "method": "sock_set_default_impl", 00:19:54.872 "params": { 00:19:54.872 "impl_name": "posix" 00:19:54.872 } 00:19:54.872 }, 00:19:54.872 { 00:19:54.872 "method": "sock_impl_set_options", 00:19:54.872 "params": { 00:19:54.872 "impl_name": "ssl", 00:19:54.872 "recv_buf_size": 4096, 00:19:54.872 "send_buf_size": 4096, 00:19:54.872 "enable_recv_pipe": true, 00:19:54.872 "enable_quickack": false, 00:19:54.872 "enable_placement_id": 0, 00:19:54.872 "enable_zerocopy_send_server": true, 00:19:54.872 "enable_zerocopy_send_client": false, 00:19:54.872 "zerocopy_threshold": 0, 00:19:54.872 "tls_version": 0, 00:19:54.872 "enable_ktls": false 00:19:54.872 } 00:19:54.872 }, 00:19:54.872 { 00:19:54.872 "method": "sock_impl_set_options", 00:19:54.872 "params": { 00:19:54.872 "impl_name": "posix", 00:19:54.872 "recv_buf_size": 
2097152, 00:19:54.872 "send_buf_size": 2097152, 00:19:54.872 "enable_recv_pipe": true, 00:19:54.872 "enable_quickack": false, 00:19:54.872 "enable_placement_id": 0, 00:19:54.872 "enable_zerocopy_send_server": true, 00:19:54.872 "enable_zerocopy_send_client": false, 00:19:54.872 "zerocopy_threshold": 0, 00:19:54.872 "tls_version": 0, 00:19:54.872 "enable_ktls": false 00:19:54.872 } 00:19:54.872 } 00:19:54.872 ] 00:19:54.872 }, 00:19:54.872 { 00:19:54.872 "subsystem": "vmd", 00:19:54.872 "config": [] 00:19:54.872 }, 00:19:54.872 { 00:19:54.872 "subsystem": "accel", 00:19:54.873 "config": [ 00:19:54.873 { 00:19:54.873 "method": "accel_set_options", 00:19:54.873 "params": { 00:19:54.873 "small_cache_size": 128, 00:19:54.873 "large_cache_size": 16, 00:19:54.873 "task_count": 2048, 00:19:54.873 "sequence_count": 2048, 00:19:54.873 "buf_count": 2048 00:19:54.873 } 00:19:54.873 } 00:19:54.873 ] 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "subsystem": "bdev", 00:19:54.873 "config": [ 00:19:54.873 { 00:19:54.873 "method": "bdev_set_options", 00:19:54.873 "params": { 00:19:54.873 "bdev_io_pool_size": 65535, 00:19:54.873 "bdev_io_cache_size": 256, 00:19:54.873 "bdev_auto_examine": true, 00:19:54.873 "iobuf_small_cache_size": 128, 00:19:54.873 "iobuf_large_cache_size": 16 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "bdev_raid_set_options", 00:19:54.873 "params": { 00:19:54.873 "process_window_size_kb": 1024, 00:19:54.873 "process_max_bandwidth_mb_sec": 0 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "bdev_iscsi_set_options", 00:19:54.873 "params": { 00:19:54.873 "timeout_sec": 30 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "bdev_nvme_set_options", 00:19:54.873 "params": { 00:19:54.873 "action_on_timeout": "none", 00:19:54.873 "timeout_us": 0, 00:19:54.873 "timeout_admin_us": 0, 00:19:54.873 "keep_alive_timeout_ms": 10000, 00:19:54.873 "arbitration_burst": 0, 00:19:54.873 "low_priority_weight": 0, 00:19:54.873 "medium_priority_weight": 0, 00:19:54.873 "high_priority_weight": 0, 00:19:54.873 "nvme_adminq_poll_period_us": 10000, 00:19:54.873 "nvme_ioq_poll_period_us": 0, 00:19:54.873 "io_queue_requests": 0, 00:19:54.873 "delay_cmd_submit": true, 00:19:54.873 "transport_retry_count": 4, 00:19:54.873 "bdev_retry_count": 3, 00:19:54.873 "transport_ack_timeout": 0, 00:19:54.873 "ctrlr_loss_timeout_sec": 0, 00:19:54.873 "reconnect_delay_sec": 0, 00:19:54.873 "fast_io_fail_timeout_sec": 0, 00:19:54.873 "disable_auto_failback": false, 00:19:54.873 "generate_uuids": false, 00:19:54.873 "transport_tos": 0, 00:19:54.873 "nvme_error_stat": false, 00:19:54.873 "rdma_srq_size": 0, 00:19:54.873 "io_path_stat": false, 00:19:54.873 "allow_accel_sequence": false, 00:19:54.873 "rdma_max_cq_size": 0, 00:19:54.873 "rdma_cm_event_timeout_ms": 0, 00:19:54.873 "dhchap_digests": [ 00:19:54.873 "sha256", 00:19:54.873 "sha384", 00:19:54.873 "sha512" 00:19:54.873 ], 00:19:54.873 "dhchap_dhgroups": [ 00:19:54.873 "null", 00:19:54.873 "ffdhe2048", 00:19:54.873 "ffdhe3072", 00:19:54.873 "ffdhe4096", 00:19:54.873 "ffdhe6144", 00:19:54.873 "ffdhe8192" 00:19:54.873 ] 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "bdev_nvme_set_hotplug", 00:19:54.873 "params": { 00:19:54.873 "period_us": 100000, 00:19:54.873 "enable": false 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "bdev_malloc_create", 00:19:54.873 "params": { 00:19:54.873 "name": "malloc0", 00:19:54.873 "num_blocks": 8192, 00:19:54.873 "block_size": 4096, 
00:19:54.873 "physical_block_size": 4096, 00:19:54.873 "uuid": "8f6b4cd4-d0b7-4b53-8c0c-6f1a3859dc17", 00:19:54.873 "optimal_io_boundary": 0, 00:19:54.873 "md_size": 0, 00:19:54.873 "dif_type": 0, 00:19:54.873 "dif_is_head_of_md": false, 00:19:54.873 "dif_pi_format": 0 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "bdev_wait_for_examine" 00:19:54.873 } 00:19:54.873 ] 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "subsystem": "nbd", 00:19:54.873 "config": [] 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "subsystem": "scheduler", 00:19:54.873 "config": [ 00:19:54.873 { 00:19:54.873 "method": "framework_set_scheduler", 00:19:54.873 "params": { 00:19:54.873 "name": "static" 00:19:54.873 } 00:19:54.873 } 00:19:54.873 ] 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "subsystem": "nvmf", 00:19:54.873 "config": [ 00:19:54.873 { 00:19:54.873 "method": "nvmf_set_config", 00:19:54.873 "params": { 00:19:54.873 "discovery_filter": "match_any", 00:19:54.873 "admin_cmd_passthru": { 00:19:54.873 "identify_ctrlr": false 00:19:54.873 } 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "nvmf_set_max_subsystems", 00:19:54.873 "params": { 00:19:54.873 "max_subsystems": 1024 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "nvmf_set_crdt", 00:19:54.873 "params": { 00:19:54.873 "crdt1": 0, 00:19:54.873 "crdt2": 0, 00:19:54.873 "crdt3": 0 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "nvmf_create_transport", 00:19:54.873 "params": { 00:19:54.873 "trtype": "TCP", 00:19:54.873 "max_queue_depth": 128, 00:19:54.873 "max_io_qpairs_per_ctrlr": 127, 00:19:54.873 "in_capsule_data_size": 4096, 00:19:54.873 "max_io_size": 131072, 00:19:54.873 "io_unit_size": 131072, 00:19:54.873 "max_aq_depth": 128, 00:19:54.873 "num_shared_buffers": 511, 00:19:54.873 "buf_cache_size": 4294967295, 00:19:54.873 "dif_insert_or_strip": false, 00:19:54.873 "zcopy": false, 00:19:54.873 "c2h_success": false, 00:19:54.873 "sock_priority": 0, 00:19:54.873 "abort_timeout_sec": 1, 00:19:54.873 " 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:54.873 ack_timeout": 0, 00:19:54.873 "data_wr_pool_size": 0 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "nvmf_create_subsystem", 00:19:54.873 "params": { 00:19:54.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.873 "allow_any_host": false, 00:19:54.873 "serial_number": "00000000000000000000", 00:19:54.873 "model_number": "SPDK bdev Controller", 00:19:54.873 "max_namespaces": 32, 00:19:54.873 "min_cntlid": 1, 00:19:54.873 "max_cntlid": 65519, 00:19:54.873 "ana_reporting": false 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "nvmf_subsystem_add_host", 00:19:54.873 "params": { 00:19:54.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.873 "host": "nqn.2016-06.io.spdk:host1", 00:19:54.873 "psk": "key0" 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "nvmf_subsystem_add_ns", 00:19:54.873 "params": { 00:19:54.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.873 "namespace": { 00:19:54.873 "nsid": 1, 00:19:54.873 "bdev_name": "malloc0", 00:19:54.873 "nguid": "8F6B4CD4D0B74B538C0C6F1A3859DC17", 00:19:54.873 "uuid": "8f6b4cd4-d0b7-4b53-8c0c-6f1a3859dc17", 00:19:54.873 "no_auto_visible": false 00:19:54.873 } 00:19:54.873 } 00:19:54.873 }, 00:19:54.873 { 00:19:54.873 "method": "nvmf_subsystem_add_listener", 00:19:54.873 "params": { 00:19:54.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.873 "listen_address": { 
00:19:54.873 "trtype": "TCP", 00:19:54.873 "adrfam": "IPv4", 00:19:54.873 "traddr": "10.0.0.2", 00:19:54.873 "trsvcid": "4420" 00:19:54.873 }, 00:19:54.873 "secure_channel": false, 00:19:54.873 "sock_impl": "ssl" 00:19:54.873 } 00:19:54.873 } 00:19:54.873 ] 00:19:54.873 } 00:19:54.873 ] 00:19:54.873 }' 00:19:54.873 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.873 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3093606 00:19:54.873 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3093606 00:19:54.873 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:54.873 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3093606 ']' 00:19:54.873 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.873 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.873 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.873 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.873 21:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.134 [2024-07-24 21:45:02.998159] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:19:55.134 [2024-07-24 21:45:02.998208] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.134 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.134 [2024-07-24 21:45:03.054857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.134 [2024-07-24 21:45:03.133472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.134 [2024-07-24 21:45:03.133506] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.134 [2024-07-24 21:45:03.133513] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.134 [2024-07-24 21:45:03.133519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.134 [2024-07-24 21:45:03.133524] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:55.134 [2024-07-24 21:45:03.133574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.393 [2024-07-24 21:45:03.345079] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.393 [2024-07-24 21:45:03.384155] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.393 [2024-07-24 21:45:03.384351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.960 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.960 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:55.960 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:55.960 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.960 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.960 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.960 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3093641 00:19:55.960 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3093641 /var/tmp/bdevperf.sock 00:19:55.960 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3093641 ']' 00:19:55.960 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:55.960 "subsystems": [ 00:19:55.960 { 00:19:55.960 "subsystem": "keyring", 00:19:55.960 "config": [ 00:19:55.960 { 00:19:55.960 "method": "keyring_file_add_key", 00:19:55.960 "params": { 00:19:55.960 "name": "key0", 00:19:55.960 "path": "/tmp/tmp.I7JMZbPFGl" 00:19:55.960 } 00:19:55.960 } 00:19:55.960 ] 00:19:55.960 }, 00:19:55.960 { 00:19:55.960 "subsystem": "iobuf", 00:19:55.960 "config": [ 00:19:55.960 { 00:19:55.960 "method": "iobuf_set_options", 00:19:55.960 "params": { 00:19:55.960 "small_pool_count": 8192, 00:19:55.960 "large_pool_count": 1024, 00:19:55.960 "small_bufsize": 8192, 00:19:55.960 "large_bufsize": 135168 00:19:55.960 } 00:19:55.960 } 00:19:55.960 ] 00:19:55.960 }, 00:19:55.960 { 00:19:55.960 "subsystem": "sock", 00:19:55.960 "config": [ 00:19:55.960 { 00:19:55.960 "method": "sock_set_default_impl", 00:19:55.960 "params": { 00:19:55.960 "impl_name": "posix" 00:19:55.960 } 00:19:55.960 }, 00:19:55.960 { 00:19:55.960 "method": "sock_impl_set_options", 00:19:55.960 "params": { 00:19:55.960 "impl_name": "ssl", 00:19:55.960 "recv_buf_size": 4096, 00:19:55.960 "send_buf_size": 4096, 00:19:55.960 "enable_recv_pipe": true, 00:19:55.960 "enable_quickack": false, 00:19:55.960 "enable_placement_id": 0, 00:19:55.960 "enable_zerocopy_send_server": true, 00:19:55.960 "enable_zerocopy_send_client": false, 00:19:55.960 "zerocopy_threshold": 0, 00:19:55.960 "tls_version": 0, 00:19:55.960 "enable_ktls": false 00:19:55.960 } 00:19:55.960 }, 00:19:55.960 { 00:19:55.960 "method": "sock_impl_set_options", 00:19:55.960 "params": { 00:19:55.960 "impl_name": "posix", 00:19:55.960 "recv_buf_size": 2097152, 00:19:55.960 "send_buf_size": 2097152, 00:19:55.960 "enable_recv_pipe": true, 00:19:55.960 "enable_quickack": false, 00:19:55.960 "enable_placement_id": 0, 00:19:55.960 "enable_zerocopy_send_server": true, 00:19:55.960 "enable_zerocopy_send_client": false, 00:19:55.960 "zerocopy_threshold": 0, 00:19:55.960 
"tls_version": 0, 00:19:55.960 "enable_ktls": false 00:19:55.960 } 00:19:55.960 } 00:19:55.960 ] 00:19:55.960 }, 00:19:55.960 { 00:19:55.960 "subsystem": "vmd", 00:19:55.960 "config": [] 00:19:55.960 }, 00:19:55.960 { 00:19:55.960 "subsystem": "accel", 00:19:55.960 "config": [ 00:19:55.960 { 00:19:55.960 "method": "accel_set_options", 00:19:55.960 "params": { 00:19:55.960 "small_cache_size": 128, 00:19:55.960 "large_cache_size": 16, 00:19:55.960 "task_count": 2048, 00:19:55.960 "sequence_count": 2048, 00:19:55.960 "buf_count": 2048 00:19:55.960 } 00:19:55.960 } 00:19:55.960 ] 00:19:55.960 }, 00:19:55.960 { 00:19:55.960 "subsystem": "bdev", 00:19:55.960 "config": [ 00:19:55.960 { 00:19:55.960 "method": "bdev_set_options", 00:19:55.960 "params": { 00:19:55.960 "bdev_io_pool_size": 65535, 00:19:55.960 "bdev_io_cache_size": 256, 00:19:55.960 "bdev_auto_examine": true, 00:19:55.960 "iobuf_small_cache_size": 128, 00:19:55.960 "iobuf_large_cache_size": 16 00:19:55.960 } 00:19:55.960 }, 00:19:55.960 { 00:19:55.960 "method": "bdev_raid_set_options", 00:19:55.960 "params": { 00:19:55.960 "process_window_size_kb": 1024, 00:19:55.960 "process_max_bandwidth_mb_sec": 0 00:19:55.960 } 00:19:55.960 }, 00:19:55.960 { 00:19:55.960 "method": "bdev_iscsi_set_options", 00:19:55.960 "params": { 00:19:55.960 "timeout_sec": 30 00:19:55.960 } 00:19:55.960 }, 00:19:55.960 { 00:19:55.960 "method": "bdev_nvme_set_options", 00:19:55.960 "params": { 00:19:55.960 "action_on_timeout": "none", 00:19:55.960 "timeout_us": 0, 00:19:55.960 "timeout_admin_us": 0, 00:19:55.960 "keep_alive_timeout_ms": 10000, 00:19:55.960 "arbitration_burst": 0, 00:19:55.960 "low_priority_weight": 0, 00:19:55.960 "medium_priority_weight": 0, 00:19:55.960 "high_priority_weight": 0, 00:19:55.960 "nvme_adminq_poll_period_us": 10000, 00:19:55.960 "nvme_ioq_poll_period_us": 0, 00:19:55.960 "io_queue_requests": 512, 00:19:55.960 "delay_cmd_submit": true, 00:19:55.960 "transport_retry_count": 4, 00:19:55.960 "bdev_retry_count": 3, 00:19:55.960 "transport_ack_timeout": 0, 00:19:55.960 "ctrlr_loss_timeout_sec": 0, 00:19:55.960 "reconnect_delay_sec": 0, 00:19:55.960 "fast_io_fail_timeout_sec": 0, 00:19:55.960 "disable_auto_failback": false, 00:19:55.960 "generate_uuids": false, 00:19:55.960 "transport_tos": 0, 00:19:55.960 "nvme_error_stat": false, 00:19:55.960 "rdma_srq_size": 0, 00:19:55.960 "io_path_stat": false, 00:19:55.960 "allow_accel_sequence": false, 00:19:55.960 "rdma_max_cq_size": 0, 00:19:55.960 "rdma_cm_event_timeout_ms": 0, 00:19:55.960 "dhchap_digests": [ 00:19:55.960 "sha256", 00:19:55.960 "sha384", 00:19:55.960 "sha512" 00:19:55.960 ], 00:19:55.960 "dhchap_dhgroups": [ 00:19:55.960 "null", 00:19:55.960 "ffdhe2048", 00:19:55.960 "ffdhe3072", 00:19:55.960 "ffdhe4096", 00:19:55.960 "ffdhe6144", 00:19:55.960 "ffdhe8192" 00:19:55.960 ] 00:19:55.960 } 00:19:55.960 }, 00:19:55.960 { 00:19:55.960 "method": "bdev_nvme_attach_controller", 00:19:55.960 "params": { 00:19:55.960 "name": "nvme0", 00:19:55.960 "trtype": "TCP", 00:19:55.960 "adrfam": "IPv4", 00:19:55.960 "traddr": "10.0.0.2", 00:19:55.960 "trsvcid": "4420", 00:19:55.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.961 "prchk_reftag": false, 00:19:55.961 "prchk_guard": false, 00:19:55.961 "ctrlr_loss_timeout_sec": 0, 00:19:55.961 "reconnect_delay_sec": 0, 00:19:55.961 "fast_io_fail_timeout_sec": 0, 00:19:55.961 "psk": "key0", 00:19:55.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.961 "hdgst": false, 00:19:55.961 "ddgst": false 00:19:55.961 } 00:19:55.961 }, 00:19:55.961 { 
00:19:55.961 "method": "bdev_nvme_set_hotplug", 00:19:55.961 "params": { 00:19:55.961 "period_us": 100000, 00:19:55.961 "enable": false 00:19:55.961 } 00:19:55.961 }, 00:19:55.961 { 00:19:55.961 "method": "bdev_enable_histogram", 00:19:55.961 "params": { 00:19:55.961 "name": "nvme0n1", 00:19:55.961 "enable": true 00:19:55.961 } 00:19:55.961 }, 00:19:55.961 { 00:19:55.961 "method": "bdev_wait_for_examine" 00:19:55.961 } 00:19:55.961 ] 00:19:55.961 }, 00:19:55.961 { 00:19:55.961 "subsystem": "nbd", 00:19:55.961 "config": [] 00:19:55.961 } 00:19:55.961 ] 00:19:55.961 }' 00:19:55.961 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.961 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:55.961 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.961 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.961 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.961 21:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.961 [2024-07-24 21:45:03.874359] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:19:55.961 [2024-07-24 21:45:03.874405] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093641 ] 00:19:55.961 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.961 [2024-07-24 21:45:03.928922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.961 [2024-07-24 21:45:04.008327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.221 [2024-07-24 21:45:04.158520] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.789 21:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.789 21:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:56.789 21:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:56.789 21:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:56.789 21:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.789 21:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:57.048 Running I/O for 1 seconds... 
00:19:57.985 00:19:57.985 Latency(us) 00:19:57.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.985 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:57.985 Verification LBA range: start 0x0 length 0x2000 00:19:57.985 nvme0n1 : 1.06 1195.66 4.67 0.00 0.00 104585.98 7237.45 176890.21 00:19:57.985 =================================================================================================================== 00:19:57.985 Total : 1195.66 4.67 0.00 0.00 104585.98 7237.45 176890.21 00:19:57.985 0 00:19:57.985 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:57.985 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:57.985 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:57.985 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:57.985 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:57.985 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:57.985 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:57.985 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:57.985 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:57.985 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:57.985 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:57.985 nvmf_trace.0 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3093641 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3093641 ']' 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3093641 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3093641 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3093641' 00:19:58.245 killing process with pid 3093641 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3093641 00:19:58.245 Received shutdown signal, test time was about 1.000000 seconds 00:19:58.245 00:19:58.245 Latency(us) 00:19:58.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.245 
=================================================================================================================== 00:19:58.245 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3093641 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:58.245 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:58.506 rmmod nvme_tcp 00:19:58.506 rmmod nvme_fabrics 00:19:58.506 rmmod nvme_keyring 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3093606 ']' 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3093606 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3093606 ']' 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3093606 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3093606 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3093606' 00:19:58.506 killing process with pid 3093606 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3093606 00:19:58.506 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3093606 00:19:58.765 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:58.765 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:58.765 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:58.765 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.765 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:58.765 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.765 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:19:58.765 21:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.675 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:00.675 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NQ0zG0li2i /tmp/tmp.incCBiJJt0 /tmp/tmp.I7JMZbPFGl 00:20:00.675 00:20:00.675 real 1m25.157s 00:20:00.675 user 2m14.195s 00:20:00.675 sys 0m26.521s 00:20:00.675 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:00.675 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.675 ************************************ 00:20:00.675 END TEST nvmf_tls 00:20:00.675 ************************************ 00:20:00.675 21:45:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:00.675 21:45:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:00.675 21:45:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.675 21:45:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:00.675 ************************************ 00:20:00.675 START TEST nvmf_fips 00:20:00.675 ************************************ 00:20:00.675 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:00.936 * Looking for test storage... 00:20:00.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:00.936 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 
00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:00.937 21:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:00.937 Error setting digest 00:20:00.937 00125155DF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:00.937 00125155DF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.937 
21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.937 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.196 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:01.196 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:01.196 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.196 21:45:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.474 21:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:06.474 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:06.474 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:06.474 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.475 21:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:06.475 Found net devices under 0000:86:00.0: cvl_0_0 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:06.475 Found net devices under 0000:86:00.1: cvl_0_1 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.475 21:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:06.475 21:45:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:06.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:20:06.475 00:20:06.475 --- 10.0.0.2 ping statistics --- 00:20:06.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.475 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:06.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:20:06.475 00:20:06.475 --- 10.0.0.1 ping statistics --- 00:20:06.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.475 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3098039 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3098039 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3098039 ']' 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.475 21:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:06.475 [2024-07-24 21:45:14.348021] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:20:06.475 [2024-07-24 21:45:14.348077] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.475 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.475 [2024-07-24 21:45:14.405316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.475 [2024-07-24 21:45:14.477693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.475 [2024-07-24 21:45:14.477733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.475 [2024-07-24 21:45:14.477740] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.475 [2024-07-24 21:45:14.477746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.475 [2024-07-24 21:45:14.477751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:06.475 [2024-07-24 21:45:14.477770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.044 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.044 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:07.044 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:07.044 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:07.044 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:07.302 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.302 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:07.302 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:07.302 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:07.302 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:07.302 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:07.302 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:07.302 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:07.302 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:07.302 [2024-07-24 21:45:15.324911] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.303 [2024-07-24 21:45:15.340925] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.303 [2024-07-24 21:45:15.341109] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.303 
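(For orientation while reading the trace: fips.sh has just written a retained TLS PSK to key.txt, restricted it to mode 0600, and configured the target through rpc.py until the TLS listener on 10.0.0.2:4420 came up. A minimal standalone sketch of that flow is below, using the key and the cnode1/host1 NQNs visible in this log; the exact rpc.py subcommands and flags issued by setup_nvmf_tgt_conf are not shown in the trace, so treat them as assumptions.)

  # sketch only -- the interleaved xtrace above is authoritative
  KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$KEY" > key.txt && chmod 0600 key.txt    # PSK in NVMe TLS interchange format, owner-only
  ./scripts/rpc.py nvmf_create_transport -t tcp -o                                     # flags assumed
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt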
[2024-07-24 21:45:15.369175] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:07.303 malloc0 00:20:07.303 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:07.303 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3098191 00:20:07.303 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3098191 /var/tmp/bdevperf.sock 00:20:07.303 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:07.303 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3098191 ']' 00:20:07.303 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.303 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.303 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.303 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.303 21:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:07.562 [2024-07-24 21:45:15.451356] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:20:07.562 [2024-07-24 21:45:15.451410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098191 ] 00:20:07.562 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.562 [2024-07-24 21:45:15.502716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.562 [2024-07-24 21:45:15.575755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.132 21:45:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.132 21:45:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:08.132 21:45:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:08.392 [2024-07-24 21:45:16.394441] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:08.392 [2024-07-24 21:45:16.394528] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:08.392 TLSTESTn1 00:20:08.652 21:45:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.652 Running I/O for 10 seconds... 
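(The initiator half of this TLS run is easier to follow without the xtrace prefixes. Consolidated from the commands already shown above, with the long jenkins workspace paths shortened; this is a restatement of the trace, not a separate run:)

  # start bdevperf in RPC-wait mode, attach the TLS controller, then kick off the workload
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk ./test/nvmf/fips/key.txt   # same PSK file as the target; this is what trips the deprecation notice above
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests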
00:20:18.695 00:20:18.695 Latency(us) 00:20:18.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.695 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:18.695 Verification LBA range: start 0x0 length 0x2000 00:20:18.695 TLSTESTn1 : 10.08 1326.89 5.18 0.00 0.00 96154.37 4929.45 134947.17 00:20:18.695 =================================================================================================================== 00:20:18.695 Total : 1326.89 5.18 0.00 0.00 96154.37 4929.45 134947.17 00:20:18.695 0 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:18.695 nvmf_trace.0 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3098191 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3098191 ']' 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3098191 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:18.695 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3098191 00:20:18.954 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:18.954 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:18.954 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3098191' 00:20:18.954 killing process with pid 3098191 00:20:18.954 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3098191 00:20:18.954 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.954 00:20:18.954 Latency(us) 00:20:18.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.955 =================================================================================================================== 00:20:18.955 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.955 
[2024-07-24 21:45:26.848753] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:18.955 21:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3098191 00:20:18.955 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:18.955 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:18.955 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:18.955 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:18.955 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:18.955 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:18.955 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:18.955 rmmod nvme_tcp 00:20:18.955 rmmod nvme_fabrics 00:20:18.955 rmmod nvme_keyring 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3098039 ']' 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3098039 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3098039 ']' 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3098039 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3098039 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3098039' 00:20:19.215 killing process with pid 3098039 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3098039 00:20:19.215 [2024-07-24 21:45:27.131299] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3098039 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:19.215 21:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.215 21:45:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.756 21:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:21.756 21:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:21.756 00:20:21.756 real 0m20.609s 00:20:21.756 user 0m23.140s 00:20:21.756 sys 0m8.285s 00:20:21.756 21:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:21.756 21:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.756 ************************************ 00:20:21.756 END TEST nvmf_fips 00:20:21.756 ************************************ 00:20:21.756 21:45:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:20:21.756 21:45:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:20:21.756 21:45:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:20:21.756 21:45:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:20:21.756 21:45:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:20:21.756 21:45:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:27.032 
21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:27.032 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:27.033 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:27.033 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:27.033 Found net devices under 0000:86:00.0: cvl_0_0 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:27.033 Found net devices under 0000:86:00.1: cvl_0_1 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:27.033 ************************************ 00:20:27.033 START TEST nvmf_perf_adq 00:20:27.033 ************************************ 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:27.033 * Looking for test storage... 
00:20:27.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.033 21:45:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.033 21:45:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:31.233 21:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:31.233 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:31.233 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:31.233 Found net devices under 0000:86:00.0: cvl_0_0 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:31.233 Found net devices under 0000:86:00.1: cvl_0_1 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.233 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:31.234 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:31.234 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:31.234 21:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:32.172 21:45:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:34.088 21:45:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:39.372 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:39.372 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:39.372 Found net devices under 0000:86:00.0: cvl_0_0 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.372 21:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:39.372 Found net devices under 0000:86:00.1: cvl_0_1 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:39.372 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:39.373 21:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
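(The namespace plumbing being repeated here is the same nvmf_tcp_init sequence the fips test went through earlier. Stripped of the xtrace noise, and with the device names as detected in this run, it amounts to:)

  ip netns add cvl_0_0_ns_spdk                      # the target gets its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # first E810 port moves into that namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # the iptables ACCEPT rule for TCP port 4420 and the cross-namespace pings follow in the trace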
00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:39.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:20:39.373 00:20:39.373 --- 10.0.0.2 ping statistics --- 00:20:39.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.373 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:39.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:20:39.373 00:20:39.373 --- 10.0.0.1 ping statistics --- 00:20:39.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.373 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3107759 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3107759 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3107759 ']' 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:39.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.373 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.373 [2024-07-24 21:45:47.119671] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:20:39.373 [2024-07-24 21:45:47.119715] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.373 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.373 [2024-07-24 21:45:47.176230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.373 [2024-07-24 21:45:47.257402] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.373 [2024-07-24 21:45:47.257438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.373 [2024-07-24 21:45:47.257444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.373 [2024-07-24 21:45:47.257450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.373 [2024-07-24 21:45:47.257455] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.373 [2024-07-24 21:45:47.257499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.373 [2024-07-24 21:45:47.257593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.373 [2024-07-24 21:45:47.257610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.373 [2024-07-24 21:45:47.257612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
00:20:39.943 21:45:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:39.943 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.943 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.943 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.943 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:39.943 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.943 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.203 [2024-07-24 21:45:48.100623] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.203 Malloc1 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.203 [2024-07-24 21:45:48.156295] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3108010 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:40.203 21:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:40.203 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.153 21:45:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:42.153 21:45:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.153 21:45:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.153 21:45:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.153 21:45:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:42.153 "tick_rate": 2300000000, 00:20:42.153 "poll_groups": [ 00:20:42.153 { 00:20:42.153 "name": "nvmf_tgt_poll_group_000", 00:20:42.153 "admin_qpairs": 1, 00:20:42.153 "io_qpairs": 1, 00:20:42.153 "current_admin_qpairs": 1, 00:20:42.153 "current_io_qpairs": 1, 00:20:42.153 "pending_bdev_io": 0, 00:20:42.153 "completed_nvme_io": 19206, 00:20:42.153 "transports": [ 00:20:42.153 { 00:20:42.153 "trtype": "TCP" 00:20:42.153 } 00:20:42.153 ] 00:20:42.153 }, 00:20:42.153 { 00:20:42.153 "name": "nvmf_tgt_poll_group_001", 00:20:42.153 "admin_qpairs": 0, 00:20:42.153 "io_qpairs": 1, 00:20:42.153 "current_admin_qpairs": 0, 00:20:42.153 "current_io_qpairs": 1, 00:20:42.153 "pending_bdev_io": 0, 00:20:42.153 "completed_nvme_io": 19708, 00:20:42.153 "transports": [ 00:20:42.153 { 00:20:42.153 "trtype": "TCP" 00:20:42.153 } 00:20:42.153 ] 00:20:42.153 }, 00:20:42.153 { 00:20:42.153 "name": "nvmf_tgt_poll_group_002", 00:20:42.153 "admin_qpairs": 0, 00:20:42.153 "io_qpairs": 1, 00:20:42.153 "current_admin_qpairs": 0, 00:20:42.153 "current_io_qpairs": 1, 00:20:42.153 "pending_bdev_io": 0, 00:20:42.153 "completed_nvme_io": 18084, 00:20:42.153 "transports": [ 00:20:42.153 { 00:20:42.153 "trtype": "TCP" 00:20:42.153 } 00:20:42.153 ] 00:20:42.153 }, 00:20:42.153 { 00:20:42.153 "name": "nvmf_tgt_poll_group_003", 00:20:42.153 "admin_qpairs": 0, 00:20:42.153 "io_qpairs": 1, 00:20:42.153 "current_admin_qpairs": 0, 00:20:42.153 "current_io_qpairs": 1, 00:20:42.153 "pending_bdev_io": 0, 00:20:42.153 "completed_nvme_io": 19623, 00:20:42.153 "transports": [ 00:20:42.153 { 00:20:42.153 "trtype": "TCP" 00:20:42.153 } 00:20:42.153 ] 00:20:42.153 } 00:20:42.153 ] 00:20:42.153 }' 00:20:42.153 21:45:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:42.153 21:45:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:42.153 21:45:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:42.153 21:45:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:42.153 21:45:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 3108010 00:20:50.277 Initializing NVMe Controllers 00:20:50.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:50.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:50.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:50.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:50.277 Initialization complete. Launching workers. 00:20:50.277 ======================================================== 00:20:50.277 Latency(us) 00:20:50.277 Device Information : IOPS MiB/s Average min max 00:20:50.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10403.50 40.64 6151.95 1658.91 9941.97 00:20:50.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10359.40 40.47 6178.02 1598.07 11955.27 00:20:50.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9871.40 38.56 6504.61 1676.54 47718.76 00:20:50.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10054.30 39.27 6365.73 1523.98 11956.75 00:20:50.277 ======================================================== 00:20:50.277 Total : 40688.59 158.94 6296.97 1523.98 47718.76 00:20:50.277 00:20:50.278 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:50.278 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:50.278 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:50.278 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:50.278 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:50.278 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:50.278 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:50.278 rmmod nvme_tcp 00:20:50.278 rmmod nvme_fabrics 00:20:50.537 rmmod nvme_keyring 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3107759 ']' 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3107759 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3107759 ']' 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3107759 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3107759 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:50.537 21:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3107759' 00:20:50.537 killing process with pid 3107759 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3107759 00:20:50.537 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3107759 00:20:50.797 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:50.797 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:50.797 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:50.797 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:50.797 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:50.797 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.797 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.797 21:45:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.706 21:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:52.706 21:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:52.706 21:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:54.087 21:46:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:55.467 21:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.746 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:00.747 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:00.747 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:00.747 Found net devices under 0000:86:00.0: cvl_0_0 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:00.747 Found net devices under 0000:86:00.1: cvl_0_1 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:00.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:21:00.747 00:21:00.747 --- 10.0.0.2 ping statistics --- 00:21:00.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.747 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:00.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:21:00.747 00:21:00.747 --- 10.0.0.1 ping statistics --- 00:21:00.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.747 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:00.747 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:01.006 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:01.006 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:01.006 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:01.007 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:01.007 net.core.busy_poll = 1 00:21:01.007 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:01.007 net.core.busy_read = 1 00:21:01.007 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:01.007 21:46:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:01.007 
21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3111795 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3111795 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3111795 ']' 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.007 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.267 [2024-07-24 21:46:09.134792] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:21:01.267 [2024-07-24 21:46:09.134834] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.267 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.267 [2024-07-24 21:46:09.191114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:01.267 [2024-07-24 21:46:09.270759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.267 [2024-07-24 21:46:09.270799] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.267 [2024-07-24 21:46:09.270810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.267 [2024-07-24 21:46:09.270816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.267 [2024-07-24 21:46:09.270821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
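For reference, the adq_configure_driver sequence traced just above (target/perf_adq.sh@22-38) reduces to the commands below. This is a condensed sketch rather than captured output; the ice-driven cvl_0_0 interface, the cvl_0_0_ns_spdk namespace, and the two-traffic-class layout all follow the run above, and <spdk> abbreviates the workspace checkout path shown in the log:

# driver/kernel-side ADQ plumbing, run against the target-side interface
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on              # enable hardware TC offload (ADQ queue groups)
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                                                         # socket busy polling, as set in this second run
sysctl -w net.core.busy_read=1
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # steer NVMe/TCP port 4420 into TC1 in hardware
ip netns exec cvl_0_0_ns_spdk <spdk>/scripts/perf/nvmf/set_xps_rxqs cvl_0_0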
00:21:01.267 [2024-07-24 21:46:09.270863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.267 [2024-07-24 21:46:09.270875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.267 [2024-07-24 21:46:09.270968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.267 [2024-07-24 21:46:09.270970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.836 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.836 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:01.836 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.836 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.836 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.096 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.096 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:02.096 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:02.096 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:02.096 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.096 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.096 21:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.096 [2024-07-24 21:46:10.129173] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.096 Malloc1 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.096 [2024-07-24 21:46:10.176979] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3111905 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:02.096 21:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:02.355 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.261 21:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:04.261 21:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.261 21:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.261 21:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.261 21:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:04.261 "tick_rate": 2300000000, 00:21:04.261 "poll_groups": [ 00:21:04.261 { 00:21:04.261 "name": "nvmf_tgt_poll_group_000", 00:21:04.261 "admin_qpairs": 1, 00:21:04.261 "io_qpairs": 1, 00:21:04.261 "current_admin_qpairs": 1, 00:21:04.261 
"current_io_qpairs": 1, 00:21:04.261 "pending_bdev_io": 0, 00:21:04.261 "completed_nvme_io": 22372, 00:21:04.261 "transports": [ 00:21:04.261 { 00:21:04.261 "trtype": "TCP" 00:21:04.261 } 00:21:04.261 ] 00:21:04.261 }, 00:21:04.261 { 00:21:04.261 "name": "nvmf_tgt_poll_group_001", 00:21:04.261 "admin_qpairs": 0, 00:21:04.261 "io_qpairs": 3, 00:21:04.261 "current_admin_qpairs": 0, 00:21:04.261 "current_io_qpairs": 3, 00:21:04.261 "pending_bdev_io": 0, 00:21:04.261 "completed_nvme_io": 31563, 00:21:04.261 "transports": [ 00:21:04.261 { 00:21:04.261 "trtype": "TCP" 00:21:04.261 } 00:21:04.261 ] 00:21:04.261 }, 00:21:04.261 { 00:21:04.261 "name": "nvmf_tgt_poll_group_002", 00:21:04.261 "admin_qpairs": 0, 00:21:04.261 "io_qpairs": 0, 00:21:04.261 "current_admin_qpairs": 0, 00:21:04.261 "current_io_qpairs": 0, 00:21:04.261 "pending_bdev_io": 0, 00:21:04.261 "completed_nvme_io": 0, 00:21:04.261 "transports": [ 00:21:04.261 { 00:21:04.261 "trtype": "TCP" 00:21:04.261 } 00:21:04.261 ] 00:21:04.261 }, 00:21:04.261 { 00:21:04.261 "name": "nvmf_tgt_poll_group_003", 00:21:04.261 "admin_qpairs": 0, 00:21:04.261 "io_qpairs": 0, 00:21:04.261 "current_admin_qpairs": 0, 00:21:04.261 "current_io_qpairs": 0, 00:21:04.261 "pending_bdev_io": 0, 00:21:04.261 "completed_nvme_io": 0, 00:21:04.261 "transports": [ 00:21:04.261 { 00:21:04.261 "trtype": "TCP" 00:21:04.261 } 00:21:04.261 ] 00:21:04.261 } 00:21:04.261 ] 00:21:04.261 }' 00:21:04.261 21:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:04.261 21:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:04.261 21:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:04.261 21:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:04.261 21:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3111905 00:21:12.423 Initializing NVMe Controllers 00:21:12.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:12.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:12.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:12.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:12.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:12.423 Initialization complete. Launching workers. 
00:21:12.423 ======================================================== 00:21:12.423 Latency(us) 00:21:12.423 Device Information : IOPS MiB/s Average min max 00:21:12.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11503.41 44.94 5563.67 1633.69 46526.44 00:21:12.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5500.31 21.49 11658.05 2156.38 56517.51 00:21:12.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5485.61 21.43 11666.21 2076.97 56497.86 00:21:12.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5618.01 21.95 11392.34 2060.85 56070.67 00:21:12.423 ======================================================== 00:21:12.423 Total : 28107.34 109.79 9112.30 1633.69 56517.51 00:21:12.423 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:12.423 rmmod nvme_tcp 00:21:12.423 rmmod nvme_fabrics 00:21:12.423 rmmod nvme_keyring 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3111795 ']' 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3111795 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3111795 ']' 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3111795 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3111795 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3111795' 00:21:12.423 killing process with pid 3111795 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3111795 00:21:12.423 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3111795 00:21:12.683 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:12.683 
21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:12.683 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:12.683 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:12.683 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:12.683 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.683 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.683 21:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.596 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:14.596 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:14.596 00:21:14.596 real 0m48.250s 00:21:14.596 user 2m48.601s 00:21:14.596 sys 0m9.150s 00:21:14.596 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:14.596 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:14.596 ************************************ 00:21:14.596 END TEST nvmf_perf_adq 00:21:14.596 ************************************ 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:14.857 ************************************ 00:21:14.857 START TEST nvmf_shutdown 00:21:14.857 ************************************ 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:14.857 * Looking for test storage... 
00:21:14.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.857 21:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:14.857 21:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:14.857 ************************************ 00:21:14.857 START TEST nvmf_shutdown_tc1 00:21:14.857 ************************************ 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:14.857 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:14.858 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:14.858 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.858 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.858 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.858 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:14.858 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:14.858 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:14.858 21:46:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:20.137 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:20.137 21:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:20.137 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:20.137 Found net devices under 0000:86:00.0: cvl_0_0 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.137 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:20.138 Found net devices under 0000:86:00.1: cvl_0_1 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:20.138 21:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:20.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:21:20.138 00:21:20.138 --- 10.0.0.2 ping statistics --- 00:21:20.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.138 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:20.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:20.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:21:20.138 00:21:20.138 --- 10.0.0.1 ping statistics --- 00:21:20.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.138 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:20.138 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3117042 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3117042 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3117042 ']' 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:20.398 21:46:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:20.398 [2024-07-24 21:46:28.318108] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:21:20.398 [2024-07-24 21:46:28.318149] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.398 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.398 [2024-07-24 21:46:28.377686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:20.398 [2024-07-24 21:46:28.460209] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.398 [2024-07-24 21:46:28.460243] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.398 [2024-07-24 21:46:28.460250] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.398 [2024-07-24 21:46:28.460257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.398 [2024-07-24 21:46:28.460262] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
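The nvmftestinit trace above boils down to a two-namespace topology: the target-side E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, with TCP port 4420 opened for NVMe/TCP. Condensed into a sketch (device, namespace and address names exactly as they appear in this run; the real logic lives in nvmf/common.sh):

# move the target port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
# the target application itself then runs inside that namespace:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E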
00:21:20.398 [2024-07-24 21:46:28.460303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.398 [2024-07-24 21:46:28.460390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.398 [2024-07-24 21:46:28.460496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.398 [2024-07-24 21:46:28.460497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.339 [2024-07-24 21:46:29.167450] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.339 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.339 Malloc1 00:21:21.339 [2024-07-24 21:46:29.263128] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.339 Malloc2 00:21:21.339 Malloc3 00:21:21.339 Malloc4 00:21:21.339 Malloc5 00:21:21.339 Malloc6 00:21:21.600 Malloc7 00:21:21.600 Malloc8 00:21:21.600 Malloc9 00:21:21.600 Malloc10 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3117328 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3117328 /var/tmp/bdevperf.sock 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3117328 ']' 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.600 21:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.600 { 00:21:21.600 "params": { 00:21:21.600 "name": "Nvme$subsystem", 00:21:21.600 "trtype": "$TEST_TRANSPORT", 00:21:21.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.600 "adrfam": "ipv4", 00:21:21.600 "trsvcid": "$NVMF_PORT", 00:21:21.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.600 "hdgst": ${hdgst:-false}, 00:21:21.600 "ddgst": ${ddgst:-false} 00:21:21.600 }, 00:21:21.600 "method": "bdev_nvme_attach_controller" 00:21:21.600 } 00:21:21.600 EOF 00:21:21.600 )") 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.600 { 00:21:21.600 "params": { 00:21:21.600 "name": "Nvme$subsystem", 00:21:21.600 "trtype": "$TEST_TRANSPORT", 00:21:21.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.600 "adrfam": "ipv4", 00:21:21.600 "trsvcid": "$NVMF_PORT", 00:21:21.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.600 "hdgst": ${hdgst:-false}, 00:21:21.600 "ddgst": ${ddgst:-false} 00:21:21.600 }, 00:21:21.600 "method": "bdev_nvme_attach_controller" 00:21:21.600 } 00:21:21.600 EOF 00:21:21.600 )") 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.600 { 00:21:21.600 "params": { 00:21:21.600 "name": 
"Nvme$subsystem", 00:21:21.600 "trtype": "$TEST_TRANSPORT", 00:21:21.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.600 "adrfam": "ipv4", 00:21:21.600 "trsvcid": "$NVMF_PORT", 00:21:21.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.600 "hdgst": ${hdgst:-false}, 00:21:21.600 "ddgst": ${ddgst:-false} 00:21:21.600 }, 00:21:21.600 "method": "bdev_nvme_attach_controller" 00:21:21.600 } 00:21:21.600 EOF 00:21:21.600 )") 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.600 { 00:21:21.600 "params": { 00:21:21.600 "name": "Nvme$subsystem", 00:21:21.600 "trtype": "$TEST_TRANSPORT", 00:21:21.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.600 "adrfam": "ipv4", 00:21:21.600 "trsvcid": "$NVMF_PORT", 00:21:21.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.600 "hdgst": ${hdgst:-false}, 00:21:21.600 "ddgst": ${ddgst:-false} 00:21:21.600 }, 00:21:21.600 "method": "bdev_nvme_attach_controller" 00:21:21.600 } 00:21:21.600 EOF 00:21:21.600 )") 00:21:21.600 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.861 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.861 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.861 { 00:21:21.861 "params": { 00:21:21.861 "name": "Nvme$subsystem", 00:21:21.861 "trtype": "$TEST_TRANSPORT", 00:21:21.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.861 "adrfam": "ipv4", 00:21:21.861 "trsvcid": "$NVMF_PORT", 00:21:21.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.861 "hdgst": ${hdgst:-false}, 00:21:21.861 "ddgst": ${ddgst:-false} 00:21:21.861 }, 00:21:21.861 "method": "bdev_nvme_attach_controller" 00:21:21.861 } 00:21:21.861 EOF 00:21:21.861 )") 00:21:21.861 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.861 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.861 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.861 { 00:21:21.861 "params": { 00:21:21.861 "name": "Nvme$subsystem", 00:21:21.861 "trtype": "$TEST_TRANSPORT", 00:21:21.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.861 "adrfam": "ipv4", 00:21:21.861 "trsvcid": "$NVMF_PORT", 00:21:21.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.861 "hdgst": ${hdgst:-false}, 00:21:21.861 "ddgst": ${ddgst:-false} 00:21:21.861 }, 00:21:21.861 "method": "bdev_nvme_attach_controller" 00:21:21.861 } 00:21:21.861 EOF 00:21:21.861 )") 00:21:21.861 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.861 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:21:21.861 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.861 { 00:21:21.861 "params": { 00:21:21.861 "name": "Nvme$subsystem", 00:21:21.861 "trtype": "$TEST_TRANSPORT", 00:21:21.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.861 "adrfam": "ipv4", 00:21:21.861 "trsvcid": "$NVMF_PORT", 00:21:21.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.861 "hdgst": ${hdgst:-false}, 00:21:21.861 "ddgst": ${ddgst:-false} 00:21:21.861 }, 00:21:21.861 "method": "bdev_nvme_attach_controller" 00:21:21.861 } 00:21:21.861 EOF 00:21:21.861 )") 00:21:21.861 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.861 [2024-07-24 21:46:29.737300] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:21:21.861 [2024-07-24 21:46:29.737347] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:21.861 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.861 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.862 { 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme$subsystem", 00:21:21.862 "trtype": "$TEST_TRANSPORT", 00:21:21.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 "trsvcid": "$NVMF_PORT", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.862 "hdgst": ${hdgst:-false}, 00:21:21.862 "ddgst": ${ddgst:-false} 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 } 00:21:21.862 EOF 00:21:21.862 )") 00:21:21.862 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.862 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.862 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.862 { 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme$subsystem", 00:21:21.862 "trtype": "$TEST_TRANSPORT", 00:21:21.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 "trsvcid": "$NVMF_PORT", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.862 "hdgst": ${hdgst:-false}, 00:21:21.862 "ddgst": ${ddgst:-false} 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 } 00:21:21.862 EOF 00:21:21.862 )") 00:21:21.862 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.862 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.862 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.862 { 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme$subsystem", 00:21:21.862 "trtype": "$TEST_TRANSPORT", 00:21:21.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 
"trsvcid": "$NVMF_PORT", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.862 "hdgst": ${hdgst:-false}, 00:21:21.862 "ddgst": ${ddgst:-false} 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 } 00:21:21.862 EOF 00:21:21.862 )") 00:21:21.862 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.862 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:21.862 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.862 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:21.862 21:46:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme1", 00:21:21.862 "trtype": "tcp", 00:21:21.862 "traddr": "10.0.0.2", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 "trsvcid": "4420", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.862 "hdgst": false, 00:21:21.862 "ddgst": false 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 },{ 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme2", 00:21:21.862 "trtype": "tcp", 00:21:21.862 "traddr": "10.0.0.2", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 "trsvcid": "4420", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:21.862 "hdgst": false, 00:21:21.862 "ddgst": false 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 },{ 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme3", 00:21:21.862 "trtype": "tcp", 00:21:21.862 "traddr": "10.0.0.2", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 "trsvcid": "4420", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:21.862 "hdgst": false, 00:21:21.862 "ddgst": false 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 },{ 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme4", 00:21:21.862 "trtype": "tcp", 00:21:21.862 "traddr": "10.0.0.2", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 "trsvcid": "4420", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:21.862 "hdgst": false, 00:21:21.862 "ddgst": false 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 },{ 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme5", 00:21:21.862 "trtype": "tcp", 00:21:21.862 "traddr": "10.0.0.2", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 "trsvcid": "4420", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:21.862 "hdgst": false, 00:21:21.862 "ddgst": false 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 },{ 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme6", 00:21:21.862 "trtype": "tcp", 00:21:21.862 "traddr": "10.0.0.2", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 "trsvcid": "4420", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:21.862 "hdgst": false, 00:21:21.862 "ddgst": false 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 },{ 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme7", 00:21:21.862 "trtype": "tcp", 
00:21:21.862 "traddr": "10.0.0.2", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 "trsvcid": "4420", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:21.862 "hdgst": false, 00:21:21.862 "ddgst": false 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 },{ 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme8", 00:21:21.862 "trtype": "tcp", 00:21:21.862 "traddr": "10.0.0.2", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 "trsvcid": "4420", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:21.862 "hdgst": false, 00:21:21.862 "ddgst": false 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 },{ 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme9", 00:21:21.862 "trtype": "tcp", 00:21:21.862 "traddr": "10.0.0.2", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 "trsvcid": "4420", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:21.862 "hdgst": false, 00:21:21.862 "ddgst": false 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 },{ 00:21:21.862 "params": { 00:21:21.862 "name": "Nvme10", 00:21:21.862 "trtype": "tcp", 00:21:21.862 "traddr": "10.0.0.2", 00:21:21.862 "adrfam": "ipv4", 00:21:21.862 "trsvcid": "4420", 00:21:21.862 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:21.862 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:21.862 "hdgst": false, 00:21:21.862 "ddgst": false 00:21:21.862 }, 00:21:21.862 "method": "bdev_nvme_attach_controller" 00:21:21.862 }' 00:21:21.862 [2024-07-24 21:46:29.793655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.862 [2024-07-24 21:46:29.867709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.244 21:46:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.244 21:46:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:23.244 21:46:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:23.244 21:46:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.244 21:46:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.244 21:46:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.244 21:46:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3117328 00:21:23.244 21:46:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:23.244 21:46:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:24.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3117328 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3117042 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.184 { 00:21:24.184 "params": { 00:21:24.184 "name": "Nvme$subsystem", 00:21:24.184 "trtype": "$TEST_TRANSPORT", 00:21:24.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.184 "adrfam": "ipv4", 00:21:24.184 "trsvcid": "$NVMF_PORT", 00:21:24.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.184 "hdgst": ${hdgst:-false}, 00:21:24.184 "ddgst": ${ddgst:-false} 00:21:24.184 }, 00:21:24.184 "method": "bdev_nvme_attach_controller" 00:21:24.184 } 00:21:24.184 EOF 00:21:24.184 )") 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.184 { 00:21:24.184 "params": { 00:21:24.184 "name": "Nvme$subsystem", 00:21:24.184 "trtype": "$TEST_TRANSPORT", 00:21:24.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.184 "adrfam": "ipv4", 00:21:24.184 "trsvcid": "$NVMF_PORT", 00:21:24.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.184 "hdgst": ${hdgst:-false}, 00:21:24.184 "ddgst": ${ddgst:-false} 00:21:24.184 }, 00:21:24.184 "method": "bdev_nvme_attach_controller" 00:21:24.184 } 00:21:24.184 EOF 00:21:24.184 )") 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.184 { 00:21:24.184 "params": { 00:21:24.184 "name": "Nvme$subsystem", 00:21:24.184 "trtype": "$TEST_TRANSPORT", 00:21:24.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.184 "adrfam": "ipv4", 00:21:24.184 "trsvcid": "$NVMF_PORT", 00:21:24.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.184 "hdgst": ${hdgst:-false}, 00:21:24.184 "ddgst": ${ddgst:-false} 00:21:24.184 }, 00:21:24.184 "method": "bdev_nvme_attach_controller" 00:21:24.184 } 00:21:24.184 EOF 00:21:24.184 )") 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.184 { 00:21:24.184 "params": { 00:21:24.184 "name": "Nvme$subsystem", 00:21:24.184 "trtype": "$TEST_TRANSPORT", 00:21:24.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.184 "adrfam": "ipv4", 00:21:24.184 "trsvcid": "$NVMF_PORT", 00:21:24.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.184 "hdgst": ${hdgst:-false}, 00:21:24.184 "ddgst": ${ddgst:-false} 00:21:24.184 }, 00:21:24.184 "method": "bdev_nvme_attach_controller" 00:21:24.184 } 00:21:24.184 EOF 00:21:24.184 )") 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.184 { 00:21:24.184 "params": { 00:21:24.184 "name": "Nvme$subsystem", 00:21:24.184 "trtype": "$TEST_TRANSPORT", 00:21:24.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.184 "adrfam": "ipv4", 00:21:24.184 "trsvcid": "$NVMF_PORT", 00:21:24.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.184 "hdgst": ${hdgst:-false}, 00:21:24.184 "ddgst": ${ddgst:-false} 00:21:24.184 }, 00:21:24.184 "method": "bdev_nvme_attach_controller" 00:21:24.184 } 00:21:24.184 EOF 00:21:24.184 )") 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.184 { 00:21:24.184 "params": { 00:21:24.184 "name": "Nvme$subsystem", 00:21:24.184 "trtype": "$TEST_TRANSPORT", 00:21:24.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.184 "adrfam": "ipv4", 00:21:24.184 "trsvcid": "$NVMF_PORT", 00:21:24.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.184 "hdgst": ${hdgst:-false}, 00:21:24.184 "ddgst": ${ddgst:-false} 00:21:24.184 }, 00:21:24.184 "method": "bdev_nvme_attach_controller" 00:21:24.184 } 00:21:24.184 EOF 00:21:24.184 )") 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.184 [2024-07-24 21:46:32.273285] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:21:24.184 [2024-07-24 21:46:32.273339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3117809 ] 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.184 { 00:21:24.184 "params": { 00:21:24.184 "name": "Nvme$subsystem", 00:21:24.184 "trtype": "$TEST_TRANSPORT", 00:21:24.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.184 "adrfam": "ipv4", 00:21:24.184 "trsvcid": "$NVMF_PORT", 00:21:24.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.184 "hdgst": ${hdgst:-false}, 00:21:24.184 "ddgst": ${ddgst:-false} 00:21:24.184 }, 00:21:24.184 "method": "bdev_nvme_attach_controller" 00:21:24.184 } 00:21:24.184 EOF 00:21:24.184 )") 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.184 { 00:21:24.184 "params": { 00:21:24.184 "name": "Nvme$subsystem", 00:21:24.184 "trtype": "$TEST_TRANSPORT", 00:21:24.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.184 "adrfam": "ipv4", 00:21:24.184 "trsvcid": "$NVMF_PORT", 00:21:24.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.184 "hdgst": ${hdgst:-false}, 00:21:24.184 "ddgst": ${ddgst:-false} 00:21:24.184 }, 00:21:24.184 "method": "bdev_nvme_attach_controller" 00:21:24.184 } 00:21:24.184 EOF 00:21:24.184 )") 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.184 { 00:21:24.184 "params": { 00:21:24.184 "name": "Nvme$subsystem", 00:21:24.184 "trtype": "$TEST_TRANSPORT", 00:21:24.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.184 "adrfam": "ipv4", 00:21:24.184 "trsvcid": "$NVMF_PORT", 00:21:24.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.184 "hdgst": ${hdgst:-false}, 00:21:24.184 "ddgst": ${ddgst:-false} 00:21:24.184 }, 00:21:24.184 "method": "bdev_nvme_attach_controller" 00:21:24.184 } 00:21:24.184 EOF 00:21:24.184 )") 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.184 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.184 { 00:21:24.185 "params": { 00:21:24.185 "name": "Nvme$subsystem", 00:21:24.185 "trtype": "$TEST_TRANSPORT", 00:21:24.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.185 
"adrfam": "ipv4", 00:21:24.185 "trsvcid": "$NVMF_PORT", 00:21:24.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.185 "hdgst": ${hdgst:-false}, 00:21:24.185 "ddgst": ${ddgst:-false} 00:21:24.185 }, 00:21:24.185 "method": "bdev_nvme_attach_controller" 00:21:24.185 } 00:21:24.185 EOF 00:21:24.185 )") 00:21:24.185 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.185 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.445 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:24.445 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:24.445 21:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:24.445 "params": { 00:21:24.445 "name": "Nvme1", 00:21:24.445 "trtype": "tcp", 00:21:24.445 "traddr": "10.0.0.2", 00:21:24.445 "adrfam": "ipv4", 00:21:24.445 "trsvcid": "4420", 00:21:24.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.445 "hdgst": false, 00:21:24.445 "ddgst": false 00:21:24.445 }, 00:21:24.445 "method": "bdev_nvme_attach_controller" 00:21:24.445 },{ 00:21:24.445 "params": { 00:21:24.445 "name": "Nvme2", 00:21:24.445 "trtype": "tcp", 00:21:24.445 "traddr": "10.0.0.2", 00:21:24.445 "adrfam": "ipv4", 00:21:24.445 "trsvcid": "4420", 00:21:24.445 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:24.445 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:24.445 "hdgst": false, 00:21:24.445 "ddgst": false 00:21:24.445 }, 00:21:24.445 "method": "bdev_nvme_attach_controller" 00:21:24.445 },{ 00:21:24.445 "params": { 00:21:24.445 "name": "Nvme3", 00:21:24.445 "trtype": "tcp", 00:21:24.445 "traddr": "10.0.0.2", 00:21:24.445 "adrfam": "ipv4", 00:21:24.445 "trsvcid": "4420", 00:21:24.445 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:24.445 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:24.445 "hdgst": false, 00:21:24.445 "ddgst": false 00:21:24.445 }, 00:21:24.445 "method": "bdev_nvme_attach_controller" 00:21:24.445 },{ 00:21:24.445 "params": { 00:21:24.445 "name": "Nvme4", 00:21:24.445 "trtype": "tcp", 00:21:24.445 "traddr": "10.0.0.2", 00:21:24.445 "adrfam": "ipv4", 00:21:24.445 "trsvcid": "4420", 00:21:24.445 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:24.445 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:24.445 "hdgst": false, 00:21:24.445 "ddgst": false 00:21:24.445 }, 00:21:24.445 "method": "bdev_nvme_attach_controller" 00:21:24.445 },{ 00:21:24.445 "params": { 00:21:24.445 "name": "Nvme5", 00:21:24.445 "trtype": "tcp", 00:21:24.445 "traddr": "10.0.0.2", 00:21:24.445 "adrfam": "ipv4", 00:21:24.445 "trsvcid": "4420", 00:21:24.445 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:24.445 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:24.445 "hdgst": false, 00:21:24.445 "ddgst": false 00:21:24.445 }, 00:21:24.445 "method": "bdev_nvme_attach_controller" 00:21:24.445 },{ 00:21:24.445 "params": { 00:21:24.445 "name": "Nvme6", 00:21:24.445 "trtype": "tcp", 00:21:24.445 "traddr": "10.0.0.2", 00:21:24.445 "adrfam": "ipv4", 00:21:24.445 "trsvcid": "4420", 00:21:24.445 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:24.445 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:24.445 "hdgst": false, 00:21:24.445 "ddgst": false 00:21:24.445 }, 00:21:24.445 "method": "bdev_nvme_attach_controller" 00:21:24.445 },{ 00:21:24.445 "params": { 00:21:24.445 "name": "Nvme7", 
00:21:24.445 "trtype": "tcp", 00:21:24.445 "traddr": "10.0.0.2", 00:21:24.445 "adrfam": "ipv4", 00:21:24.445 "trsvcid": "4420", 00:21:24.445 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:24.445 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:24.445 "hdgst": false, 00:21:24.445 "ddgst": false 00:21:24.445 }, 00:21:24.445 "method": "bdev_nvme_attach_controller" 00:21:24.445 },{ 00:21:24.445 "params": { 00:21:24.445 "name": "Nvme8", 00:21:24.445 "trtype": "tcp", 00:21:24.445 "traddr": "10.0.0.2", 00:21:24.445 "adrfam": "ipv4", 00:21:24.445 "trsvcid": "4420", 00:21:24.445 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:24.445 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:24.445 "hdgst": false, 00:21:24.445 "ddgst": false 00:21:24.445 }, 00:21:24.445 "method": "bdev_nvme_attach_controller" 00:21:24.445 },{ 00:21:24.445 "params": { 00:21:24.445 "name": "Nvme9", 00:21:24.445 "trtype": "tcp", 00:21:24.445 "traddr": "10.0.0.2", 00:21:24.445 "adrfam": "ipv4", 00:21:24.445 "trsvcid": "4420", 00:21:24.445 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:24.445 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:24.445 "hdgst": false, 00:21:24.445 "ddgst": false 00:21:24.445 }, 00:21:24.445 "method": "bdev_nvme_attach_controller" 00:21:24.445 },{ 00:21:24.445 "params": { 00:21:24.445 "name": "Nvme10", 00:21:24.445 "trtype": "tcp", 00:21:24.445 "traddr": "10.0.0.2", 00:21:24.445 "adrfam": "ipv4", 00:21:24.445 "trsvcid": "4420", 00:21:24.445 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:24.445 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:24.445 "hdgst": false, 00:21:24.445 "ddgst": false 00:21:24.445 }, 00:21:24.445 "method": "bdev_nvme_attach_controller" 00:21:24.445 }' 00:21:24.445 [2024-07-24 21:46:32.329228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.445 [2024-07-24 21:46:32.403998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.831 Running I/O for 1 seconds... 
00:21:26.771
00:21:26.771 Latency(us)
00:21:26.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:26.771 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.771 Verification LBA range: start 0x0 length 0x400
00:21:26.771 Nvme1n1 : 1.13 226.96 14.18 0.00 0.00 278154.46 23137.06 237069.36
00:21:26.771 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.771 Verification LBA range: start 0x0 length 0x400
00:21:26.771 Nvme2n1 : 1.11 288.31 18.02 0.00 0.00 216777.24 20515.62 225215.89
00:21:26.771 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.771 Verification LBA range: start 0x0 length 0x400
00:21:26.771 Nvme3n1 : 1.12 229.48 14.34 0.00 0.00 267282.25 22225.25 255305.46
00:21:26.771 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.771 Verification LBA range: start 0x0 length 0x400
00:21:26.771 Nvme4n1 : 1.10 290.62 18.16 0.00 0.00 208570.10 21883.33 224304.08
00:21:26.771 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.771 Verification LBA range: start 0x0 length 0x400
00:21:26.772 Nvme5n1 : 1.14 280.95 17.56 0.00 0.00 213201.43 20629.59 223392.28
00:21:26.772 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.772 Verification LBA range: start 0x0 length 0x400
00:21:26.772 Nvme6n1 : 1.14 223.77 13.99 0.00 0.00 263772.83 21655.37 275365.18
00:21:26.772 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.772 Verification LBA range: start 0x0 length 0x400
00:21:26.772 Nvme7n1 : 1.13 226.60 14.16 0.00 0.00 256359.74 23137.06 279012.40
00:21:26.772 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.772 Verification LBA range: start 0x0 length 0x400
00:21:26.772 Nvme8n1 : 1.12 228.09 14.26 0.00 0.00 250414.30 23592.96 251658.24
00:21:26.772 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.772 Verification LBA range: start 0x0 length 0x400
00:21:26.772 Nvme9n1 : 1.14 279.54 17.47 0.00 0.00 201489.36 20059.71 230686.72
00:21:26.772 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.772 Verification LBA range: start 0x0 length 0x400
00:21:26.772 Nvme10n1 : 1.15 278.38 17.40 0.00 0.00 199566.02 19831.76 229774.91
00:21:26.772 ===================================================================================================================
00:21:26.772 Total : 2552.69 159.54 0.00 0.00 232487.89 19831.76 279012.40
00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:27.032 21:46:35
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.032 rmmod nvme_tcp 00:21:27.032 rmmod nvme_fabrics 00:21:27.032 rmmod nvme_keyring 00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3117042 ']' 00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3117042 00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3117042 ']' 00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3117042 00:21:27.032 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:27.293 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.293 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3117042 00:21:27.293 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:27.293 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:27.293 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3117042' 00:21:27.293 killing process with pid 3117042 00:21:27.293 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3117042 00:21:27.293 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3117042 00:21:27.553 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:27.553 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:27.553 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:27.553 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:27.553 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:27.553 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:27.553 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.553 21:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:30.096 00:21:30.096 real 0m14.726s 00:21:30.096 user 0m33.599s 00:21:30.096 sys 0m5.387s 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:30.096 ************************************ 00:21:30.096 END TEST nvmf_shutdown_tc1 00:21:30.096 ************************************ 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:30.096 ************************************ 00:21:30.096 START TEST nvmf_shutdown_tc2 00:21:30.096 ************************************ 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.096 21:46:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.096 21:46:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:30.096 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:30.096 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.096 21:46:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.096 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:30.097 Found net devices under 0000:86:00.0: cvl_0_0 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:30.097 Found net devices under 0000:86:00.1: cvl_0_1 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.097 21:46:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.097 21:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:30.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:21:30.097 00:21:30.097 --- 10.0.0.2 ping statistics --- 00:21:30.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.097 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:30.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:21:30.097 00:21:30.097 --- 10.0.0.1 ping statistics --- 00:21:30.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.097 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3118832 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3118832 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3118832 ']' 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.097 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.097 [2024-07-24 21:46:38.104328] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:21:30.097 [2024-07-24 21:46:38.104376] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.097 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.097 [2024-07-24 21:46:38.161571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:30.407 [2024-07-24 21:46:38.243021] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.407 [2024-07-24 21:46:38.243061] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.407 [2024-07-24 21:46:38.243069] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.407 [2024-07-24 21:46:38.243075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.407 [2024-07-24 21:46:38.243081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.407 [2024-07-24 21:46:38.243137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.407 [2024-07-24 21:46:38.243224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:30.407 [2024-07-24 21:46:38.243331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.407 [2024-07-24 21:46:38.243332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.977 [2024-07-24 21:46:38.969301] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.977 21:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.977 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:21:30.977 Malloc1 00:21:30.977 [2024-07-24 21:46:39.065247] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.977 Malloc2 00:21:31.238 Malloc3 00:21:31.238 Malloc4 00:21:31.238 Malloc5 00:21:31.238 Malloc6 00:21:31.238 Malloc7 00:21:31.238 Malloc8 00:21:31.499 Malloc9 00:21:31.499 Malloc10 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3119114 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3119114 /var/tmp/bdevperf.sock 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3119114 ']' 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.499 { 00:21:31.499 "params": { 00:21:31.499 "name": "Nvme$subsystem", 00:21:31.499 "trtype": "$TEST_TRANSPORT", 00:21:31.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.499 "adrfam": "ipv4", 00:21:31.499 "trsvcid": "$NVMF_PORT", 00:21:31.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.499 "hdgst": ${hdgst:-false}, 00:21:31.499 "ddgst": ${ddgst:-false} 00:21:31.499 }, 00:21:31.499 "method": "bdev_nvme_attach_controller" 00:21:31.499 } 00:21:31.499 EOF 00:21:31.499 )") 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.499 { 00:21:31.499 "params": { 00:21:31.499 "name": "Nvme$subsystem", 00:21:31.499 "trtype": "$TEST_TRANSPORT", 00:21:31.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.499 "adrfam": "ipv4", 00:21:31.499 "trsvcid": "$NVMF_PORT", 00:21:31.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.499 "hdgst": ${hdgst:-false}, 00:21:31.499 "ddgst": ${ddgst:-false} 00:21:31.499 }, 00:21:31.499 "method": "bdev_nvme_attach_controller" 00:21:31.499 } 00:21:31.499 EOF 00:21:31.499 )") 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.499 { 00:21:31.499 "params": { 00:21:31.499 "name": "Nvme$subsystem", 00:21:31.499 "trtype": "$TEST_TRANSPORT", 00:21:31.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.499 "adrfam": "ipv4", 00:21:31.499 "trsvcid": "$NVMF_PORT", 00:21:31.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.499 "hdgst": ${hdgst:-false}, 00:21:31.499 "ddgst": ${ddgst:-false} 00:21:31.499 }, 00:21:31.499 "method": "bdev_nvme_attach_controller" 00:21:31.499 } 00:21:31.499 EOF 00:21:31.499 )") 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:21:31.499 { 00:21:31.499 "params": { 00:21:31.499 "name": "Nvme$subsystem", 00:21:31.499 "trtype": "$TEST_TRANSPORT", 00:21:31.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.499 "adrfam": "ipv4", 00:21:31.499 "trsvcid": "$NVMF_PORT", 00:21:31.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.499 "hdgst": ${hdgst:-false}, 00:21:31.499 "ddgst": ${ddgst:-false} 00:21:31.499 }, 00:21:31.499 "method": "bdev_nvme_attach_controller" 00:21:31.499 } 00:21:31.499 EOF 00:21:31.499 )") 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.499 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.499 { 00:21:31.499 "params": { 00:21:31.499 "name": "Nvme$subsystem", 00:21:31.499 "trtype": "$TEST_TRANSPORT", 00:21:31.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.499 "adrfam": "ipv4", 00:21:31.499 "trsvcid": "$NVMF_PORT", 00:21:31.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.499 "hdgst": ${hdgst:-false}, 00:21:31.499 "ddgst": ${ddgst:-false} 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 } 00:21:31.500 EOF 00:21:31.500 )") 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.500 { 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme$subsystem", 00:21:31.500 "trtype": "$TEST_TRANSPORT", 00:21:31.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "$NVMF_PORT", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.500 "hdgst": ${hdgst:-false}, 00:21:31.500 "ddgst": ${ddgst:-false} 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 } 00:21:31.500 EOF 00:21:31.500 )") 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.500 { 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme$subsystem", 00:21:31.500 "trtype": "$TEST_TRANSPORT", 00:21:31.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "$NVMF_PORT", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.500 "hdgst": ${hdgst:-false}, 00:21:31.500 "ddgst": ${ddgst:-false} 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 } 00:21:31.500 EOF 00:21:31.500 )") 00:21:31.500 [2024-07-24 21:46:39.539953] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:21:31.500 [2024-07-24 21:46:39.540001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3119114 ] 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.500 { 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme$subsystem", 00:21:31.500 "trtype": "$TEST_TRANSPORT", 00:21:31.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "$NVMF_PORT", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.500 "hdgst": ${hdgst:-false}, 00:21:31.500 "ddgst": ${ddgst:-false} 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 } 00:21:31.500 EOF 00:21:31.500 )") 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.500 { 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme$subsystem", 00:21:31.500 "trtype": "$TEST_TRANSPORT", 00:21:31.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "$NVMF_PORT", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.500 "hdgst": ${hdgst:-false}, 00:21:31.500 "ddgst": ${ddgst:-false} 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 } 00:21:31.500 EOF 00:21:31.500 )") 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.500 { 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme$subsystem", 00:21:31.500 "trtype": "$TEST_TRANSPORT", 00:21:31.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "$NVMF_PORT", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.500 "hdgst": ${hdgst:-false}, 00:21:31.500 "ddgst": ${ddgst:-false} 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 } 00:21:31.500 EOF 00:21:31.500 )") 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:31.500 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:31.500 21:46:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme1", 00:21:31.500 "trtype": "tcp", 00:21:31.500 "traddr": "10.0.0.2", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "4420", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:31.500 "hdgst": false, 00:21:31.500 "ddgst": false 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 },{ 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme2", 00:21:31.500 "trtype": "tcp", 00:21:31.500 "traddr": "10.0.0.2", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "4420", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:31.500 "hdgst": false, 00:21:31.500 "ddgst": false 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 },{ 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme3", 00:21:31.500 "trtype": "tcp", 00:21:31.500 "traddr": "10.0.0.2", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "4420", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:31.500 "hdgst": false, 00:21:31.500 "ddgst": false 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 },{ 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme4", 00:21:31.500 "trtype": "tcp", 00:21:31.500 "traddr": "10.0.0.2", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "4420", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:31.500 "hdgst": false, 00:21:31.500 "ddgst": false 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 },{ 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme5", 00:21:31.500 "trtype": "tcp", 00:21:31.500 "traddr": "10.0.0.2", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "4420", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:31.500 "hdgst": false, 00:21:31.500 "ddgst": false 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 },{ 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme6", 00:21:31.500 "trtype": "tcp", 00:21:31.500 "traddr": "10.0.0.2", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "4420", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:31.500 "hdgst": false, 00:21:31.500 "ddgst": false 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 },{ 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme7", 00:21:31.500 "trtype": "tcp", 00:21:31.500 "traddr": "10.0.0.2", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "4420", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:31.500 "hdgst": false, 00:21:31.500 "ddgst": false 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 },{ 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme8", 00:21:31.500 "trtype": "tcp", 00:21:31.500 "traddr": "10.0.0.2", 00:21:31.500 "adrfam": "ipv4", 00:21:31.500 "trsvcid": "4420", 00:21:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:31.500 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:31.500 "hdgst": false, 00:21:31.500 "ddgst": false 00:21:31.500 }, 00:21:31.500 "method": "bdev_nvme_attach_controller" 00:21:31.500 },{ 00:21:31.500 "params": { 00:21:31.500 "name": "Nvme9", 00:21:31.500 "trtype": "tcp", 00:21:31.500 "traddr": "10.0.0.2", 00:21:31.501 "adrfam": "ipv4", 00:21:31.501 "trsvcid": "4420", 00:21:31.501 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:31.501 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:31.501 "hdgst": false, 00:21:31.501 "ddgst": false 00:21:31.501 }, 00:21:31.501 "method": "bdev_nvme_attach_controller" 00:21:31.501 },{ 00:21:31.501 "params": { 00:21:31.501 "name": "Nvme10", 00:21:31.501 "trtype": "tcp", 00:21:31.501 "traddr": "10.0.0.2", 00:21:31.501 "adrfam": "ipv4", 00:21:31.501 "trsvcid": "4420", 00:21:31.501 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:31.501 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:31.501 "hdgst": false, 00:21:31.501 "ddgst": false 00:21:31.501 }, 00:21:31.501 "method": "bdev_nvme_attach_controller" 00:21:31.501 }' 00:21:31.501 [2024-07-24 21:46:39.595808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.760 [2024-07-24 21:46:39.672658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.140 Running I/O for 10 seconds... 00:21:33.140 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.140 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:33.140 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:33.140 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.140 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.400 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.400 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:33.400 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:33.400 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:33.400 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:33.400 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:33.400 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:33.400 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:33.401 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:33.401 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.401 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.401 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r 
'.bdevs[0].num_read_ops' 00:21:33.401 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.401 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:33.401 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:33.401 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3119114 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3119114 ']' 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3119114 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3119114 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3119114' 00:21:33.661 killing process with pid 3119114 00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3119114 
00:21:33.661 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3119114
00:21:33.922 Received shutdown signal, test time was about 0.641240 seconds
00:21:33.922
00:21:33.922 Latency(us)
00:21:33.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:33.922 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:33.922 Verification LBA range: start 0x0 length 0x400
00:21:33.922 Nvme1n1 : 0.61 315.30 19.71 0.00 0.00 197993.81 21085.50 200597.15
00:21:33.922 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:33.922 Verification LBA range: start 0x0 length 0x400
00:21:33.922 Nvme2n1 : 0.63 303.60 18.97 0.00 0.00 202043.21 23023.08 220656.86
00:21:33.922 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:33.922 Verification LBA range: start 0x0 length 0x400
00:21:33.922 Nvme3n1 : 0.62 307.97 19.25 0.00 0.00 193504.54 25188.62 217921.45
00:21:33.922 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:33.922 Verification LBA range: start 0x0 length 0x400
00:21:33.922 Nvme4n1 : 0.61 209.37 13.09 0.00 0.00 277076.59 37384.01 203332.56
00:21:33.922 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:33.922 Verification LBA range: start 0x0 length 0x400
00:21:33.922 Nvme5n1 : 0.63 304.03 19.00 0.00 0.00 185916.25 20173.69 216097.84
00:21:33.922 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:33.922 Verification LBA range: start 0x0 length 0x400
00:21:33.922 Nvme6n1 : 0.64 296.61 18.54 0.00 0.00 185049.97 15158.76 199685.34
00:21:33.922 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:33.922 Verification LBA range: start 0x0 length 0x400
00:21:33.922 Nvme7n1 : 0.61 208.49 13.03 0.00 0.00 254628.73 33508.84 271717.95
00:21:33.922 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:33.922 Verification LBA range: start 0x0 length 0x400
00:21:33.922 Nvme8n1 : 0.61 211.40 13.21 0.00 0.00 242632.79 24504.77 221568.67
00:21:33.922 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:33.922 Verification LBA range: start 0x0 length 0x400
00:21:33.922 Nvme9n1 : 0.62 206.52 12.91 0.00 0.00 241847.43 41943.04 246187.41
00:21:33.922 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:33.922 Verification LBA range: start 0x0 length 0x400
00:21:33.922 Nvme10n1 : 0.58 219.12 13.69 0.00 0.00 215756.80 21085.50 215186.03
00:21:33.922 ===================================================================================================================
00:21:33.922 Total : 2582.39 161.40 0.00 0.00 214332.93 15158.76 271717.95
00:21:33.922 21:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:21:35.304 21:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3118832
00:21:35.304 21:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:21:35.304 21:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:35.304 21:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:35.304 21:46:42
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:35.304 rmmod nvme_tcp 00:21:35.304 rmmod nvme_fabrics 00:21:35.304 rmmod nvme_keyring 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3118832 ']' 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3118832 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3118832 ']' 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3118832 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3118832 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3118832' 00:21:35.304 killing process with pid 3118832 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3118832 00:21:35.304 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3118832 00:21:35.564 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:35.564 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:35.564 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:21:35.564 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.564 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:35.564 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.564 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.564 21:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.474 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:37.474 00:21:37.474 real 0m7.840s 00:21:37.474 user 0m23.309s 00:21:37.474 sys 0m1.300s 00:21:37.474 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:37.474 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.474 ************************************ 00:21:37.474 END TEST nvmf_shutdown_tc2 00:21:37.474 ************************************ 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:37.735 ************************************ 00:21:37.735 START TEST nvmf_shutdown_tc3 00:21:37.735 ************************************ 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:37.735 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.736 21:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:37.736 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:37.736 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp 
== rdma ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:37.736 Found net devices under 0000:86:00.0: cvl_0_0 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.736 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:37.737 Found net devices under 0000:86:00.1: cvl_0_1 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.737 21:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:37.737 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:37.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:21:37.997 00:21:37.997 --- 10.0.0.2 ping statistics --- 00:21:37.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.997 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:37.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:21:37.997 00:21:37.997 --- 10.0.0.1 ping statistics --- 00:21:37.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.997 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:37.997 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:37.998 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:37.998 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:37.998 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3120313 00:21:37.998 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3120313 00:21:37.998 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:37.998 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3120313 ']' 00:21:37.998 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.998 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.998 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
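For orientation, the nvmf_tcp_init sequence traced above reduces to: move the target-side port (cvl_0_0) into its own network namespace, address both ends, open TCP/4420, confirm reachability with ping, then start nvmf_tgt inside that namespace. A condensed sketch using the interface names and addresses from this run (not the nvmf/common.sh source itself):

  TARGET_NS=cvl_0_0_ns_spdk
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"           # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # root namespace -> target
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1    # target namespace -> initiator
  # nvmf_tgt is then launched inside the namespace on cores 1-4 (-m 0x1E):
  ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &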
00:21:37.998 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.998 21:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:37.998 [2024-07-24 21:46:45.991949] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:21:37.998 [2024-07-24 21:46:45.991994] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.998 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.998 [2024-07-24 21:46:46.049487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.257 [2024-07-24 21:46:46.130342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.257 [2024-07-24 21:46:46.130377] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.257 [2024-07-24 21:46:46.130384] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.257 [2024-07-24 21:46:46.130390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.257 [2024-07-24 21:46:46.130395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.257 [2024-07-24 21:46:46.130515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.257 [2024-07-24 21:46:46.130597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.257 [2024-07-24 21:46:46.130704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.257 [2024-07-24 21:46:46.130705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:38.827 [2024-07-24 21:46:46.833416] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.827 21:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.827 Malloc1 00:21:38.827 [2024-07-24 21:46:46.929303] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.087 Malloc2 00:21:39.087 Malloc3 00:21:39.087 Malloc4 00:21:39.087 Malloc5 00:21:39.087 Malloc6 00:21:39.087 Malloc7 00:21:39.347 Malloc8 00:21:39.347 Malloc9 00:21:39.347 Malloc10 00:21:39.347 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.347 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:39.347 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.347 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3120595 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3120595 /var/tmp/bdevperf.sock 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3120595 ']' 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
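The create_subsystems phase above batches its RPCs through rpcs.txt, so only the resulting Malloc1..Malloc10 bdevs and the TCP listener notice show up in the trace. For a single subsystem the batched calls would look roughly like the following sketch (RPC names are standard SPDK; the malloc size and serial number are illustrative guesses, not values read from this log):

  # Transport was created earlier with: nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create -b Malloc1 128 512           # hypothetical 128 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420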
00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.348 { 00:21:39.348 "params": { 00:21:39.348 "name": "Nvme$subsystem", 00:21:39.348 "trtype": "$TEST_TRANSPORT", 00:21:39.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.348 "adrfam": "ipv4", 00:21:39.348 "trsvcid": "$NVMF_PORT", 00:21:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.348 "hdgst": ${hdgst:-false}, 00:21:39.348 "ddgst": ${ddgst:-false} 00:21:39.348 }, 00:21:39.348 "method": "bdev_nvme_attach_controller" 00:21:39.348 } 00:21:39.348 EOF 00:21:39.348 )") 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.348 { 00:21:39.348 "params": { 00:21:39.348 "name": "Nvme$subsystem", 00:21:39.348 "trtype": "$TEST_TRANSPORT", 00:21:39.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.348 "adrfam": "ipv4", 00:21:39.348 "trsvcid": "$NVMF_PORT", 00:21:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.348 "hdgst": ${hdgst:-false}, 00:21:39.348 "ddgst": ${ddgst:-false} 00:21:39.348 }, 00:21:39.348 "method": "bdev_nvme_attach_controller" 00:21:39.348 } 00:21:39.348 EOF 00:21:39.348 )") 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.348 { 00:21:39.348 "params": { 00:21:39.348 "name": "Nvme$subsystem", 00:21:39.348 "trtype": "$TEST_TRANSPORT", 00:21:39.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.348 "adrfam": "ipv4", 00:21:39.348 "trsvcid": "$NVMF_PORT", 00:21:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.348 "hdgst": ${hdgst:-false}, 00:21:39.348 "ddgst": ${ddgst:-false} 00:21:39.348 }, 00:21:39.348 "method": "bdev_nvme_attach_controller" 00:21:39.348 } 00:21:39.348 EOF 00:21:39.348 )") 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:21:39.348 { 00:21:39.348 "params": { 00:21:39.348 "name": "Nvme$subsystem", 00:21:39.348 "trtype": "$TEST_TRANSPORT", 00:21:39.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.348 "adrfam": "ipv4", 00:21:39.348 "trsvcid": "$NVMF_PORT", 00:21:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.348 "hdgst": ${hdgst:-false}, 00:21:39.348 "ddgst": ${ddgst:-false} 00:21:39.348 }, 00:21:39.348 "method": "bdev_nvme_attach_controller" 00:21:39.348 } 00:21:39.348 EOF 00:21:39.348 )") 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.348 { 00:21:39.348 "params": { 00:21:39.348 "name": "Nvme$subsystem", 00:21:39.348 "trtype": "$TEST_TRANSPORT", 00:21:39.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.348 "adrfam": "ipv4", 00:21:39.348 "trsvcid": "$NVMF_PORT", 00:21:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.348 "hdgst": ${hdgst:-false}, 00:21:39.348 "ddgst": ${ddgst:-false} 00:21:39.348 }, 00:21:39.348 "method": "bdev_nvme_attach_controller" 00:21:39.348 } 00:21:39.348 EOF 00:21:39.348 )") 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.348 { 00:21:39.348 "params": { 00:21:39.348 "name": "Nvme$subsystem", 00:21:39.348 "trtype": "$TEST_TRANSPORT", 00:21:39.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.348 "adrfam": "ipv4", 00:21:39.348 "trsvcid": "$NVMF_PORT", 00:21:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.348 "hdgst": ${hdgst:-false}, 00:21:39.348 "ddgst": ${ddgst:-false} 00:21:39.348 }, 00:21:39.348 "method": "bdev_nvme_attach_controller" 00:21:39.348 } 00:21:39.348 EOF 00:21:39.348 )") 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.348 { 00:21:39.348 "params": { 00:21:39.348 "name": "Nvme$subsystem", 00:21:39.348 "trtype": "$TEST_TRANSPORT", 00:21:39.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.348 "adrfam": "ipv4", 00:21:39.348 "trsvcid": "$NVMF_PORT", 00:21:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.348 "hdgst": ${hdgst:-false}, 00:21:39.348 "ddgst": ${ddgst:-false} 00:21:39.348 }, 00:21:39.348 "method": "bdev_nvme_attach_controller" 00:21:39.348 } 00:21:39.348 EOF 00:21:39.348 )") 00:21:39.348 [2024-07-24 21:46:47.404376] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:21:39.348 [2024-07-24 21:46:47.404423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3120595 ] 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.348 { 00:21:39.348 "params": { 00:21:39.348 "name": "Nvme$subsystem", 00:21:39.348 "trtype": "$TEST_TRANSPORT", 00:21:39.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.348 "adrfam": "ipv4", 00:21:39.348 "trsvcid": "$NVMF_PORT", 00:21:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.348 "hdgst": ${hdgst:-false}, 00:21:39.348 "ddgst": ${ddgst:-false} 00:21:39.348 }, 00:21:39.348 "method": "bdev_nvme_attach_controller" 00:21:39.348 } 00:21:39.348 EOF 00:21:39.348 )") 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.348 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.348 { 00:21:39.348 "params": { 00:21:39.348 "name": "Nvme$subsystem", 00:21:39.348 "trtype": "$TEST_TRANSPORT", 00:21:39.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.348 "adrfam": "ipv4", 00:21:39.348 "trsvcid": "$NVMF_PORT", 00:21:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.348 "hdgst": ${hdgst:-false}, 00:21:39.349 "ddgst": ${ddgst:-false} 00:21:39.349 }, 00:21:39.349 "method": "bdev_nvme_attach_controller" 00:21:39.349 } 00:21:39.349 EOF 00:21:39.349 )") 00:21:39.349 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.349 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.349 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.349 { 00:21:39.349 "params": { 00:21:39.349 "name": "Nvme$subsystem", 00:21:39.349 "trtype": "$TEST_TRANSPORT", 00:21:39.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.349 "adrfam": "ipv4", 00:21:39.349 "trsvcid": "$NVMF_PORT", 00:21:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.349 "hdgst": ${hdgst:-false}, 00:21:39.349 "ddgst": ${ddgst:-false} 00:21:39.349 }, 00:21:39.349 "method": "bdev_nvme_attach_controller" 00:21:39.349 } 00:21:39.349 EOF 00:21:39.349 )") 00:21:39.349 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.349 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.349 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
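Each stanza in the config that gen_nvmf_target_json prints below maps one-to-one onto a bdev_nvme_attach_controller call; issued by hand against the bdevperf RPC socket, the Nvme1 entry would be roughly equivalent to the following (flag spellings as in scripts/rpc.py; illustrative only, since the test feeds the whole JSON to bdevperf via --json instead):

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1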
00:21:39.349 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:39.349 21:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:39.349 "params": { 00:21:39.349 "name": "Nvme1", 00:21:39.349 "trtype": "tcp", 00:21:39.349 "traddr": "10.0.0.2", 00:21:39.349 "adrfam": "ipv4", 00:21:39.349 "trsvcid": "4420", 00:21:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.349 "hdgst": false, 00:21:39.349 "ddgst": false 00:21:39.349 }, 00:21:39.349 "method": "bdev_nvme_attach_controller" 00:21:39.349 },{ 00:21:39.349 "params": { 00:21:39.349 "name": "Nvme2", 00:21:39.349 "trtype": "tcp", 00:21:39.349 "traddr": "10.0.0.2", 00:21:39.349 "adrfam": "ipv4", 00:21:39.349 "trsvcid": "4420", 00:21:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:39.349 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:39.349 "hdgst": false, 00:21:39.349 "ddgst": false 00:21:39.349 }, 00:21:39.349 "method": "bdev_nvme_attach_controller" 00:21:39.349 },{ 00:21:39.349 "params": { 00:21:39.349 "name": "Nvme3", 00:21:39.349 "trtype": "tcp", 00:21:39.349 "traddr": "10.0.0.2", 00:21:39.349 "adrfam": "ipv4", 00:21:39.349 "trsvcid": "4420", 00:21:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:39.349 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:39.349 "hdgst": false, 00:21:39.349 "ddgst": false 00:21:39.349 }, 00:21:39.349 "method": "bdev_nvme_attach_controller" 00:21:39.349 },{ 00:21:39.349 "params": { 00:21:39.349 "name": "Nvme4", 00:21:39.349 "trtype": "tcp", 00:21:39.349 "traddr": "10.0.0.2", 00:21:39.349 "adrfam": "ipv4", 00:21:39.349 "trsvcid": "4420", 00:21:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:39.349 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:39.349 "hdgst": false, 00:21:39.349 "ddgst": false 00:21:39.349 }, 00:21:39.349 "method": "bdev_nvme_attach_controller" 00:21:39.349 },{ 00:21:39.349 "params": { 00:21:39.349 "name": "Nvme5", 00:21:39.349 "trtype": "tcp", 00:21:39.349 "traddr": "10.0.0.2", 00:21:39.349 "adrfam": "ipv4", 00:21:39.349 "trsvcid": "4420", 00:21:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:39.349 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:39.349 "hdgst": false, 00:21:39.349 "ddgst": false 00:21:39.349 }, 00:21:39.349 "method": "bdev_nvme_attach_controller" 00:21:39.349 },{ 00:21:39.349 "params": { 00:21:39.349 "name": "Nvme6", 00:21:39.349 "trtype": "tcp", 00:21:39.349 "traddr": "10.0.0.2", 00:21:39.349 "adrfam": "ipv4", 00:21:39.349 "trsvcid": "4420", 00:21:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:39.349 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:39.349 "hdgst": false, 00:21:39.349 "ddgst": false 00:21:39.349 }, 00:21:39.349 "method": "bdev_nvme_attach_controller" 00:21:39.349 },{ 00:21:39.349 "params": { 00:21:39.349 "name": "Nvme7", 00:21:39.349 "trtype": "tcp", 00:21:39.349 "traddr": "10.0.0.2", 00:21:39.349 "adrfam": "ipv4", 00:21:39.349 "trsvcid": "4420", 00:21:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:39.349 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:39.349 "hdgst": false, 00:21:39.349 "ddgst": false 00:21:39.349 }, 00:21:39.349 "method": "bdev_nvme_attach_controller" 00:21:39.349 },{ 00:21:39.349 "params": { 00:21:39.349 "name": "Nvme8", 00:21:39.349 "trtype": "tcp", 00:21:39.349 "traddr": "10.0.0.2", 00:21:39.349 "adrfam": "ipv4", 00:21:39.349 "trsvcid": "4420", 00:21:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:39.349 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:39.349 "hdgst": false, 00:21:39.349 "ddgst": false 00:21:39.349 }, 00:21:39.349 "method": "bdev_nvme_attach_controller" 00:21:39.349 },{ 00:21:39.349 "params": { 00:21:39.349 "name": "Nvme9", 00:21:39.349 "trtype": "tcp", 00:21:39.349 "traddr": "10.0.0.2", 00:21:39.349 "adrfam": "ipv4", 00:21:39.349 "trsvcid": "4420", 00:21:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:39.349 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:39.349 "hdgst": false, 00:21:39.349 "ddgst": false 00:21:39.349 }, 00:21:39.349 "method": "bdev_nvme_attach_controller" 00:21:39.349 },{ 00:21:39.349 "params": { 00:21:39.349 "name": "Nvme10", 00:21:39.349 "trtype": "tcp", 00:21:39.349 "traddr": "10.0.0.2", 00:21:39.349 "adrfam": "ipv4", 00:21:39.349 "trsvcid": "4420", 00:21:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:39.349 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:39.349 "hdgst": false, 00:21:39.349 "ddgst": false 00:21:39.349 }, 00:21:39.349 "method": "bdev_nvme_attach_controller" 00:21:39.349 }' 00:21:39.349 [2024-07-24 21:46:47.461983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.609 [2024-07-24 21:46:47.536334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.513 Running I/O for 10 seconds... 00:21:42.082 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.082 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:42.082 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:42.082 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.082 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.082 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.083 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:42.083 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:42.083 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:42.083 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:42.083 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:42.083 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:42.083 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:42.083 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:42.083 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:42.083 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:42.083 21:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.083 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.083 21:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.083 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:42.083 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:42.083 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3120313 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3120313 ']' 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3120313 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3120313 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 3120313'
00:21:42.358 killing process with pid 3120313
00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3120313
00:21:42.358 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3120313
00:21:42.358 [2024-07-24 21:46:50.360597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2db00 is same with the state(5) to be set
(message repeated for tqpair=0xe2db00 through 21:46:50.361031)
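The xtrace above (target/shutdown.sh lines 59-69) shows how nvmf_shutdown_tc3 decides it is safe to start tearing the target down: it polls bdevperf over its RPC socket for Nvme1n1's read IO count and only proceeds once at least 100 reads have completed. Below is a minimal standalone sketch of that polling loop, not the test's literal code; the wait_for_io name, the 20-iteration retry budget, and the ./scripts/rpc.py path are illustrative assumptions, while the socket path, bdev name, threshold, jq filter, and 0.25 s back-off come from the trace.

  #!/usr/bin/env bash
  # Sketch of the read-IO polling loop traced above (assumptions noted inline).
  rpc_py=./scripts/rpc.py            # assumed location of SPDK's rpc.py helper
  sock=/var/tmp/bdevperf.sock        # bdevperf RPC socket seen in the trace
  wait_for_io() {
      local ret=1
      for ((i = 20; i != 0; i--)); do                    # retry budget is an assumption
          read_io_count=$("$rpc_py" -s "$sock" bdev_get_iostat -b Nvme1n1 \
              | jq -r '.bdevs[0].num_read_ops')          # same jq filter as the trace
          if [ "$read_io_count" -ge 100 ]; then          # enough reads observed
              ret=0
              break
          fi
          sleep 0.25                                     # back-off between polls
      done
      return $ret                                        # non-zero if IO never arrived
  }
  wait_for_io || echo "bdevperf never reached 100 reads" >&2

Only after this loop succeeds does the test call killprocess, which lines up with the log that follows: the qpair recv-state errors and the ABORTED - SQ DELETION completions appear only after the kill, while bdevperf still has WRITEs queued against the target.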
00:21:42.359 [2024-07-24 21:46:50.361955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2dfe0 is same with the state(5) to be set
(message repeated for tqpair=0xe2dfe0 through 21:46:50.362384)
00:21:42.359 [2024-07-24 21:46:50.363010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2e4a0 is same with the state(5) to be set
(message repeated for tqpair=0xe2e4a0 through 21:46:50.363048)
00:21:42.359 [2024-07-24 21:46:50.363671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2e980 is same with the state(5) to be set
(message repeated for tqpair=0xe2e980 through 21:46:50.364083)
00:21:42.360 [2024-07-24 21:46:50.365690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101a230 is same with the state(5) to be set
(message repeated for tqpair=0x101a230 through 21:46:50.366092)
00:21:42.361 [2024-07-24 21:46:50.366662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101a6f0 is same with the state(5) to be set
(message repeated for tqpair=0x101a6f0 through 21:46:50.367001)
00:21:42.362 [2024-07-24 21:46:50.369041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:42.362 [2024-07-24 21:46:50.369080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(matching WRITE command / ABORTED - SQ DELETION completion pairs repeated for cid:1 through cid:45, lba:32896 through lba:38528, 21:46:50.369096 through 21:46:50.369760)
00:21:42.363 [2024-07-24 21:46:50.369768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.369992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.369998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.370006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.370012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.370020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.363 [2024-07-24 21:46:50.370027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.370058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:42.363 [2024-07-24 21:46:50.370524] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1089df0 was disconnected and freed. reset controller. 
00:21:42.363 [2024-07-24 21:46:50.370585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.363 [2024-07-24 21:46:50.370595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.370605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.363 [2024-07-24 21:46:50.370612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.370619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.363 [2024-07-24 21:46:50.370626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.370633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.363 [2024-07-24 21:46:50.370640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.363 [2024-07-24 21:46:50.370646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf5340 is same with the state(5) to be set 00:21:42.363 [2024-07-24 21:46:50.370672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.363 [2024-07-24 21:46:50.370680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1172910 is same with the state(5) to be set 00:21:42.364 [2024-07-24 21:46:50.370751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1169bc0 is same with the state(5) to be set 00:21:42.364 [2024-07-24 21:46:50.370835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa6c70 is same with the state(5) to be set 00:21:42.364 [2024-07-24 21:46:50.370916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:42.364 [2024-07-24 21:46:50.370952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.370966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.370973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd2f30 is same with the state(5) to be set 00:21:42.364 [2024-07-24 21:46:50.370994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114e700 is same with the state(5) to be set 00:21:42.364 [2024-07-24 21:46:50.371086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371137] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162f60 is same with the state(5) to be set 00:21:42.364 [2024-07-24 21:46:50.371169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd6b90 is same with the state(5) to be set 00:21:42.364 [2024-07-24 21:46:50.371248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140840 is same with the state(5) to be set 00:21:42.364 [2024-07-24 21:46:50.371328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 
[2024-07-24 21:46:50.371336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.364 [2024-07-24 21:46:50.371378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.364 [2024-07-24 21:46:50.371385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113fab0 is same with the state(5) to be set 00:21:42.364 [2024-07-24 21:46:50.371454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.371888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.371896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.387984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.388015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.388026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.388037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.388051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.388067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.388076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.388087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.388096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.388109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.388119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.388131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.388140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.388151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.388160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.388172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.365 [2024-07-24 21:46:50.388180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.365 [2024-07-24 21:46:50.388191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388831] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1069610 was disconnected and freed. reset controller. 
00:21:42.366 [2024-07-24 21:46:50.388878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.366 [2024-07-24 21:46:50.388979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.366 [2024-07-24 21:46:50.388994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 
21:46:50.389106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 
21:46:50.389321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 
21:46:50.389537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.367 [2024-07-24 21:46:50.389699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.367 [2024-07-24 21:46:50.389711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.389732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 
21:46:50.389753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.389774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.389796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.389818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.389840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.389861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.389882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.389903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.389925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.389947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 
21:46:50.389971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.389980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.389991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 
21:46:50.390197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390342] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x106a9a0 was disconnected and freed. reset controller. 00:21:42.368 [2024-07-24 21:46:50.390585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.368 [2024-07-24 21:46:50.390691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.368 [2024-07-24 21:46:50.390701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390723] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.390989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.390999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391153] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.369 [2024-07-24 21:46:50.391528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.369 [2024-07-24 21:46:50.391538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.391953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.391962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.396923] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfa1200 was disconnected and freed. reset controller. 
00:21:42.370 [2024-07-24 21:46:50.397120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.397136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.397151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.397165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.397178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.397188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.397200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.397210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.397222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.397232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.397244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.397253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.397265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.397275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.397286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.397295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.397307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.397317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.397328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.397337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 
[2024-07-24 21:46:50.397349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.370 [2024-07-24 21:46:50.397359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.370 [2024-07-24 21:46:50.397370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 
21:46:50.397567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 
21:46:50.397783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.397979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.397989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.398002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.398012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.398023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.398033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.398050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.398060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.398072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.398083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.398095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.398104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.398116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.398125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.398137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.398147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.398159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.398168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.398181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.371 [2024-07-24 21:46:50.398191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.371 [2024-07-24 21:46:50.398204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398592] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18d4040 was disconnected and freed. reset controller. 00:21:42.372 [2024-07-24 21:46:50.398691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398819] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.372 [2024-07-24 21:46:50.398876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.372 [2024-07-24 21:46:50.398886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.398898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.398909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.398921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.398931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.398943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.398953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.398969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.398979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.398991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.373 [2024-07-24 21:46:50.399698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.373 [2024-07-24 21:46:50.399709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:42.374 [2024-07-24 21:46:50.399723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.399733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.399745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.399756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.399768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.399778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.399790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.399805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.399817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.399827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.399840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.399850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.399862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.399872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.399885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.399894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.399907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.399916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.399928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.399938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 
21:46:50.399950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.399961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.399972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.399982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.399994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.400003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.400016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.400025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.400037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.400054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.400066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.400075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.400089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.400099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.400111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.400130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.400219] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1088910 was disconnected and freed. reset controller. 
00:21:42.374 [2024-07-24 21:46:50.401547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf5340 (9): Bad file descriptor
00:21:42.374 [2024-07-24 21:46:50.401574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1172910 (9): Bad file descriptor
00:21:42.374 [2024-07-24 21:46:50.401591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1169bc0 (9): Bad file descriptor
00:21:42.374 [2024-07-24 21:46:50.401608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa6c70 (9): Bad file descriptor
00:21:42.374 [2024-07-24 21:46:50.401625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd2f30 (9): Bad file descriptor
00:21:42.374 [2024-07-24 21:46:50.401645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114e700 (9): Bad file descriptor
00:21:42.374 [2024-07-24 21:46:50.401662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1162f60 (9): Bad file descriptor
00:21:42.374 [2024-07-24 21:46:50.401678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd6b90 (9): Bad file descriptor
00:21:42.374 [2024-07-24 21:46:50.401699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1140840 (9): Bad file descriptor
00:21:42.374 [2024-07-24 21:46:50.401715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113fab0 (9): Bad file descriptor
00:21:42.374 [2024-07-24 21:46:50.408383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:42.374 [2024-07-24 21:46:50.408982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:42.374 [2024-07-24 21:46:50.409008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:42.374 [2024-07-24 21:46:50.409019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:42.374 [2024-07-24 21:46:50.409552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.374 [2024-07-24 21:46:50.409568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x113fab0 with addr=10.0.0.2, port=4420
00:21:42.374 [2024-07-24 21:46:50.409578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113fab0 is same with the state(5) to be set
00:21:42.374 [2024-07-24 21:46:50.410062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:42.374 [2024-07-24 21:46:50.410076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:42.374 [2024-07-24 21:46:50.410088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:42.374 [2024-07-24 21:46:50.410097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:42.374 [2024-07-24 21:46:50.410106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:42.374 [2024-07-24
21:46:50.410115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.410129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.410137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.410146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.410154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.410163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.410170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.410179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.410187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.410196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.410203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.410218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.410226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.410234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.410241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.410250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.410257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.410266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.374 [2024-07-24 21:46:50.410274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.374 [2024-07-24 21:46:50.410282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.375 [2024-07-24 21:46:50.410899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.375 [2024-07-24 21:46:50.410906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.410914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.410921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.410931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.410938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.410945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.410953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.410961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.410968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.410977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.410984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.410993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.411000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.411009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.411015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.411024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.411031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.411040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.411052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.411061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.411068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.411076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.411083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:42.376 [2024-07-24 21:46:50.411092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:42.376 [2024-07-24 21:46:50.411099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:42.376 [2024-07-24 21:46:50.411107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110e580 is same with the state(5) to be set
00:21:42.376 [2024-07-24 21:46:50.411163] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x110e580 was disconnected and freed. reset controller.
00:21:42.376 [2024-07-24 21:46:50.411213] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:42.376 [2024-07-24 21:46:50.411491] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:42.376 [2024-07-24 21:46:50.411760] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:42.376 [2024-07-24 21:46:50.412002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:42.376 [2024-07-24 21:46:50.412015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:42.376 [2024-07-24 21:46:50.412487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.376 [2024-07-24 21:46:50.412502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa6c70 with addr=10.0.0.2, port=4420
00:21:42.376 [2024-07-24 21:46:50.412510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa6c70 is same with the state(5) to be set
00:21:42.376 [2024-07-24 21:46:50.412993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.376 [2024-07-24 21:46:50.413003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1172910 with addr=10.0.0.2, port=4420
00:21:42.376 [2024-07-24 21:46:50.413010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1172910 is same with the state(5) to be set
00:21:42.376 [2024-07-24 21:46:50.413335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.376 [2024-07-24 21:46:50.413346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf5340 with addr=10.0.0.2, port=4420
00:21:42.376 [2024-07-24 21:46:50.413354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf5340 is same with the state(5) to be set
00:21:42.376 [2024-07-24 21:46:50.413364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113fab0 (9): Bad file descriptor
00:21:42.376 [2024-07-24 21:46:50.413380] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:42.376 [2024-07-24 21:46:50.414424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:42.376 [2024-07-24 21:46:50.415149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.376 [2024-07-24 21:46:50.415165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1169bc0 with addr=10.0.0.2, port=4420
00:21:42.376 [2024-07-24 21:46:50.415174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1169bc0 is same with the state(5) to be set
00:21:42.376 [2024-07-24 21:46:50.415865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.376 [2024-07-24 21:46:50.415877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1140840 with addr=10.0.0.2, port=4420
00:21:42.376 [2024-07-24 21:46:50.415886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140840 is same with the state(5) to be set
00:21:42.376 [2024-07-24 21:46:50.415896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa6c70 (9): Bad file descriptor
00:21:42.376 [2024-07-24 21:46:50.415905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1172910 (9): Bad file descriptor
00:21:42.376 [2024-07-24 21:46:50.415915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf5340 (9): Bad file descriptor
00:21:42.376 [2024-07-24 21:46:50.415923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:42.376 [2024-07-24 21:46:50.415930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:42.376 [2024-07-24 21:46:50.415939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:42.376 [2024-07-24 21:46:50.416029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.416061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.416078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.416094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.416111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.416126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.416143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.416159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.416174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.416190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 
21:46:50.416206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.416222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.416237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.376 [2024-07-24 21:46:50.416244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.376 [2024-07-24 21:46:50.416253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416365] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.377 [2024-07-24 21:46:50.416680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.377 [2024-07-24 21:46:50.416689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.416991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.416998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.417006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.417013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.417021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.417028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.417037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110fac0 is same with the state(5) to be set 00:21:42.378 [2024-07-24 21:46:50.418074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418167] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.378 [2024-07-24 21:46:50.418327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.378 [2024-07-24 21:46:50.418336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:42.379 [2024-07-24 21:46:50.418801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.379 [2024-07-24 21:46:50.418942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.379 [2024-07-24 21:46:50.418949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 
21:46:50.418957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.418964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.418972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.418979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.418987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.418994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.419003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.419009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.419018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.419024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.419033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.419040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.419055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.419063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.419072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.419079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.419087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa26b0 is same with the state(5) to be set 00:21:42.380 [2024-07-24 21:46:50.420108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420139] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.380 [2024-07-24 21:46:50.420592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.380 [2024-07-24 21:46:50.420598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.420985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.420992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.421000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.421007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.421016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.421023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.421032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.421038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.421051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.421058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.421068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.421075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.421084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.421091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.421100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.421106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.421115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.381 [2024-07-24 21:46:50.421122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.381 [2024-07-24 21:46:50.421129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7bab0 is same with the state(5) to be set 00:21:42.381 [2024-07-24 21:46:50.422972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.382 [2024-07-24 21:46:50.422995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:42.382 [2024-07-24 21:46:50.423006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:42.382 task offset: 32768 on job bdev=Nvme10n1 fails 00:21:42.382 00:21:42.382 Latency(us) 00:21:42.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.382 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:42.382 Job: Nvme1n1 ended in about 1.09 seconds with error 00:21:42.382 Verification LBA range: start 0x0 length 0x400 00:21:42.382 Nvme1n1 : 1.09 176.76 11.05 58.92 0.00 269307.77 21313.45 262599.90 00:21:42.382 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:42.382 Job: Nvme2n1 ended in about 1.09 seconds with error 00:21:42.382 Verification LBA range: start 0x0 length 0x400 00:21:42.382 Nvme2n1 : 1.09 176.55 11.03 58.85 0.00 265638.51 29633.67 275365.18 00:21:42.382 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:42.382 Job: Nvme3n1 ended in about 1.10 seconds with error 00:21:42.382 Verification LBA range: start 0x0 length 0x400 00:21:42.382 Nvme3n1 : 1.10 233.18 14.57 58.30 0.00 211340.78 20059.71 222480.47 00:21:42.382 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:42.382 Job: Nvme4n1 ended in about 1.10 seconds with error 00:21:42.382 Verification LBA range: start 0x0 length 0x400 00:21:42.382 Nvme4n1 : 1.10 174.31 10.89 58.10 0.00 261222.85 20173.69 258952.68 00:21:42.382 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:42.382 Job: Nvme5n1 ended in about 1.09 seconds with error 00:21:42.382 Verification LBA range: start 0x0 length 0x400 00:21:42.382 Nvme5n1 : 1.09 235.10 14.69 58.78 0.00 203222.86 20515.62 227951.30 00:21:42.382 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:42.382 Job: Nvme6n1 ended in about 1.10 seconds with error 00:21:42.382 Verification LBA range: start 0x0 length 0x400 00:21:42.382 Nvme6n1 : 1.10 173.99 10.87 58.00 0.00 253739.41 21541.40 235245.75 00:21:42.382 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:42.382 Job: Nvme7n1 ended in about 1.09 seconds with error 00:21:42.382 Verification LBA range: start 0x0 length 
0x400 00:21:42.382 Nvme7n1 : 1.09 234.82 14.68 58.70 0.00 197104.51 20743.57 220656.86 00:21:42.382 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:42.382 Job: Nvme8n1 ended in about 1.11 seconds with error 00:21:42.382 Verification LBA range: start 0x0 length 0x400 00:21:42.382 Nvme8n1 : 1.11 173.67 10.85 57.89 0.00 246332.33 22681.15 264423.51 00:21:42.382 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:42.382 Job: Nvme9n1 ended in about 1.09 seconds with error 00:21:42.382 Verification LBA range: start 0x0 length 0x400 00:21:42.382 Nvme9n1 : 1.09 175.90 10.99 58.63 0.00 238935.71 36016.31 223392.28 00:21:42.382 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:42.382 Job: Nvme10n1 ended in about 1.08 seconds with error 00:21:42.382 Verification LBA range: start 0x0 length 0x400 00:21:42.382 Nvme10n1 : 1.08 236.00 14.75 59.00 0.00 186646.17 22225.25 206979.78 00:21:42.382 =================================================================================================================== 00:21:42.382 Total : 1990.29 124.39 585.17 0.00 230279.04 20059.71 275365.18 00:21:42.382 [2024-07-24 21:46:50.447222] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:42.382 [2024-07-24 21:46:50.447263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:42.382 [2024-07-24 21:46:50.447836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.382 [2024-07-24 21:46:50.447856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd2f30 with addr=10.0.0.2, port=4420 00:21:42.382 [2024-07-24 21:46:50.447867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd2f30 is same with the state(5) to be set 00:21:42.382 [2024-07-24 21:46:50.447883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1169bc0 (9): Bad file descriptor 00:21:42.382 [2024-07-24 21:46:50.447894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1140840 (9): Bad file descriptor 00:21:42.382 [2024-07-24 21:46:50.447904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:42.382 [2024-07-24 21:46:50.447911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:42.382 [2024-07-24 21:46:50.447920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:42.382 [2024-07-24 21:46:50.447935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:42.382 [2024-07-24 21:46:50.447942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:42.382 [2024-07-24 21:46:50.447949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:42.382 [2024-07-24 21:46:50.447959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:42.382 [2024-07-24 21:46:50.447965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:42.382 [2024-07-24 21:46:50.447972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
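For orientation, the Total row of the Latency table above can be cross-checked against the per-device IOPS column; a throwaway awk one-liner (not part of the test run) over the ten values printed above:

  printf '%s\n' 176.76 176.55 233.18 174.31 235.10 173.99 234.82 173.67 175.90 236.00 \
    | awk '{s+=$1} END {printf "sum=%.2f (table reports 1990.29)\n", s}'

The sum comes to 1990.28, which matches the reported 1990.29 total within per-row rounding.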
00:21:42.382 [2024-07-24 21:46:50.448008] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.382 [2024-07-24 21:46:50.448022] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.382 [2024-07-24 21:46:50.448031] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.382 [2024-07-24 21:46:50.448051] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.382 [2024-07-24 21:46:50.448061] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.382 [2024-07-24 21:46:50.448595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.382 [2024-07-24 21:46:50.448615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.382 [2024-07-24 21:46:50.448621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.382 [2024-07-24 21:46:50.449120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.382 [2024-07-24 21:46:50.449137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd6b90 with addr=10.0.0.2, port=4420 00:21:42.382 [2024-07-24 21:46:50.449146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd6b90 is same with the state(5) to be set 00:21:42.382 [2024-07-24 21:46:50.449623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.382 [2024-07-24 21:46:50.449635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114e700 with addr=10.0.0.2, port=4420 00:21:42.382 [2024-07-24 21:46:50.449642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114e700 is same with the state(5) to be set 00:21:42.382 [2024-07-24 21:46:50.450090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.382 [2024-07-24 21:46:50.450103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1162f60 with addr=10.0.0.2, port=4420 00:21:42.382 [2024-07-24 21:46:50.450111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162f60 is same with the state(5) to be set 00:21:42.382 [2024-07-24 21:46:50.450122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd2f30 (9): Bad file descriptor 00:21:42.382 [2024-07-24 21:46:50.450132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:42.382 [2024-07-24 21:46:50.450140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:42.382 [2024-07-24 21:46:50.450148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:42.382 [2024-07-24 21:46:50.450160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:42.382 [2024-07-24 21:46:50.450167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:42.382 [2024-07-24 21:46:50.450174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:21:42.382 [2024-07-24 21:46:50.450188] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.382 [2024-07-24 21:46:50.450208] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.382 [2024-07-24 21:46:50.450233] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.382 [2024-07-24 21:46:50.450243] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.382 [2024-07-24 21:46:50.450936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:42.382 [2024-07-24 21:46:50.450963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.382 [2024-07-24 21:46:50.450971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.382 [2024-07-24 21:46:50.450995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd6b90 (9): Bad file descriptor 00:21:42.382 [2024-07-24 21:46:50.451005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114e700 (9): Bad file descriptor 00:21:42.382 [2024-07-24 21:46:50.451014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1162f60 (9): Bad file descriptor 00:21:42.382 [2024-07-24 21:46:50.451022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:42.382 [2024-07-24 21:46:50.451029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:42.382 [2024-07-24 21:46:50.451040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:42.382 [2024-07-24 21:46:50.451105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:42.382 [2024-07-24 21:46:50.451116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:42.382 [2024-07-24 21:46:50.451125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:42.382 [2024-07-24 21:46:50.451132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.382 [2024-07-24 21:46:50.451669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.382 [2024-07-24 21:46:50.451684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x113fab0 with addr=10.0.0.2, port=4420 00:21:42.382 [2024-07-24 21:46:50.451693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113fab0 is same with the state(5) to be set 00:21:42.383 [2024-07-24 21:46:50.451700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:42.383 [2024-07-24 21:46:50.451706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:42.383 [2024-07-24 21:46:50.451713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:21:42.383 [2024-07-24 21:46:50.451722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:42.383 [2024-07-24 21:46:50.451729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:42.383 [2024-07-24 21:46:50.451736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:42.383 [2024-07-24 21:46:50.451745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:42.383 [2024-07-24 21:46:50.451751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:42.383 [2024-07-24 21:46:50.451757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:42.383 [2024-07-24 21:46:50.451801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.383 [2024-07-24 21:46:50.451809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.383 [2024-07-24 21:46:50.451815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.383 [2024-07-24 21:46:50.452262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.383 [2024-07-24 21:46:50.452277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf5340 with addr=10.0.0.2, port=4420 00:21:42.383 [2024-07-24 21:46:50.452285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf5340 is same with the state(5) to be set 00:21:42.383 [2024-07-24 21:46:50.452440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.383 [2024-07-24 21:46:50.452452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1172910 with addr=10.0.0.2, port=4420 00:21:42.383 [2024-07-24 21:46:50.452459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1172910 is same with the state(5) to be set 00:21:42.383 [2024-07-24 21:46:50.452903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.383 [2024-07-24 21:46:50.452917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa6c70 with addr=10.0.0.2, port=4420 00:21:42.383 [2024-07-24 21:46:50.452924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa6c70 is same with the state(5) to be set 00:21:42.383 [2024-07-24 21:46:50.452935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113fab0 (9): Bad file descriptor 00:21:42.383 [2024-07-24 21:46:50.452963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf5340 (9): Bad file descriptor 00:21:42.383 [2024-07-24 21:46:50.452977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1172910 (9): Bad file descriptor 00:21:42.383 [2024-07-24 21:46:50.452985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa6c70 (9): Bad file descriptor 00:21:42.383 [2024-07-24 21:46:50.452993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:42.383 [2024-07-24 21:46:50.453000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:42.383 [2024-07-24 21:46:50.453007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:42.383 [2024-07-24 21:46:50.453040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.383 [2024-07-24 21:46:50.453054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:42.383 [2024-07-24 21:46:50.453060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:42.383 [2024-07-24 21:46:50.453067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:42.383 [2024-07-24 21:46:50.453075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:42.383 [2024-07-24 21:46:50.453082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:42.383 [2024-07-24 21:46:50.453088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:42.383 [2024-07-24 21:46:50.453096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:42.383 [2024-07-24 21:46:50.453102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:42.383 [2024-07-24 21:46:50.453109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:42.383 [2024-07-24 21:46:50.453134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.383 [2024-07-24 21:46:50.453142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.383 [2024-07-24 21:46:50.453148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
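The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED on Linux: by this point the shutdown test has taken the target down, so nothing is listening on 10.0.0.2:4420 any more, every reconnect attempt is refused, and each controller ends up in the failed state. A quick hypothetical probe (not part of the test, assuming the cvl_0_* loopback setup from this run is still in place) shows the same refusal from the host side using bash's built-in /dev/tcp:

  # Expect this to fail with "connection refused" once nvmf_tgt is gone
  if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
    echo "10.0.0.2:4420 refused the connection - no nvmf listener"
  fi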
00:21:42.954 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:42.954 21:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3120595 00:21:43.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3120595) - No such process 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:43.895 rmmod nvme_tcp 00:21:43.895 rmmod nvme_fabrics 00:21:43.895 rmmod nvme_keyring 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.895 21:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.895 21:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.832 21:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.832 00:21:45.832 real 0m8.279s 00:21:45.832 user 0m21.574s 00:21:45.832 sys 0m1.478s 00:21:45.832 21:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.832 21:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.832 ************************************ 00:21:45.832 END TEST nvmf_shutdown_tc3 00:21:45.833 ************************************ 00:21:45.833 21:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:45.833 00:21:45.833 real 0m31.174s 00:21:45.833 user 1m18.614s 00:21:45.833 sys 0m8.384s 00:21:45.833 21:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.833 21:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:45.833 ************************************ 00:21:45.833 END TEST nvmf_shutdown 00:21:45.833 ************************************ 00:21:46.093 21:46:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:21:46.093 00:21:46.093 real 10m39.219s 00:21:46.093 user 23m58.859s 00:21:46.093 sys 2m56.584s 00:21:46.093 21:46:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:46.093 21:46:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:46.093 ************************************ 00:21:46.093 END TEST nvmf_target_extra 00:21:46.093 ************************************ 00:21:46.093 21:46:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:46.093 21:46:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:46.093 21:46:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.093 21:46:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.093 ************************************ 00:21:46.093 START TEST nvmf_host 00:21:46.093 ************************************ 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:46.093 * Looking for test storage... 
00:21:46.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.093 21:46:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:46.094 21:46:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:46.094 21:46:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:46.094 21:46:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:46.094 21:46:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:46.094 21:46:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.094 21:46:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.094 ************************************ 00:21:46.094 START TEST nvmf_multicontroller 00:21:46.094 ************************************ 00:21:46.094 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:46.459 * Looking for test storage... 
00:21:46.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.459 21:46:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.459 21:46:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.745 21:46:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:51.745 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:51.745 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:51.745 Found net devices under 0000:86:00.0: cvl_0_0 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:51.745 Found net devices under 0000:86:00.1: cvl_0_1 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:51.745 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:51.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:21:51.746 00:21:51.746 --- 10.0.0.2 ping statistics --- 00:21:51.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.746 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:51.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:21:51.746 00:21:51.746 --- 10.0.0.1 ping statistics --- 00:21:51.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.746 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3124746 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3124746 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3124746 ']' 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.746 21:46:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:51.746 [2024-07-24 21:46:59.655499] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:21:51.746 [2024-07-24 21:46:59.655540] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.746 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.746 [2024-07-24 21:46:59.711517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:51.746 [2024-07-24 21:46:59.790635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.746 [2024-07-24 21:46:59.790670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.746 [2024-07-24 21:46:59.790677] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.746 [2024-07-24 21:46:59.790684] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.746 [2024-07-24 21:46:59.790689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.746 [2024-07-24 21:46:59.790786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.746 [2024-07-24 21:46:59.790847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.746 [2024-07-24 21:46:59.790849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 [2024-07-24 21:47:00.510393] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 Malloc0 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.685 
21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.686 [2024-07-24 21:47:00.576874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.686 [2024-07-24 21:47:00.584832] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.686 Malloc1 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.686 21:47:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3124990 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3124990 /var/tmp/bdevperf.sock 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3124990 ']' 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
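For reference, the target-side configuration that the trace above drives through the rpc_cmd helper can be reproduced against a running nvmf_tgt with SPDK's rpc.py. This is a minimal sketch, not the test itself: it assumes the default /var/tmp/spdk.sock RPC socket, the 10.0.0.2 listen address used in this run, and script paths relative to the SPDK source tree.

  # Transport plus two malloc-backed subsystems, each listening on ports 4420 and 4421
  # (mirrors the host/multicontroller.sh@27-41 steps shown in the trace above).
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

In this run the target is launched inside the cvl_0_0_ns_spdk network namespace, but its RPC endpoint is a Unix-domain socket on the filesystem, so the same rpc.py invocations apply unchanged from the host.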
00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.686 21:47:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.623 NVMe0n1 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.623 1 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.623 request: 00:21:53.623 { 00:21:53.623 "name": "NVMe0", 00:21:53.623 "trtype": "tcp", 00:21:53.623 "traddr": "10.0.0.2", 00:21:53.623 "adrfam": "ipv4", 00:21:53.623 
"trsvcid": "4420", 00:21:53.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.623 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:53.623 "hostaddr": "10.0.0.2", 00:21:53.623 "hostsvcid": "60000", 00:21:53.623 "prchk_reftag": false, 00:21:53.623 "prchk_guard": false, 00:21:53.623 "hdgst": false, 00:21:53.623 "ddgst": false, 00:21:53.623 "method": "bdev_nvme_attach_controller", 00:21:53.623 "req_id": 1 00:21:53.623 } 00:21:53.623 Got JSON-RPC error response 00:21:53.623 response: 00:21:53.623 { 00:21:53.623 "code": -114, 00:21:53.623 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:53.623 } 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.623 request: 00:21:53.623 { 00:21:53.623 "name": "NVMe0", 00:21:53.623 "trtype": "tcp", 00:21:53.623 "traddr": "10.0.0.2", 00:21:53.623 "adrfam": "ipv4", 00:21:53.623 "trsvcid": "4420", 00:21:53.623 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:53.623 "hostaddr": "10.0.0.2", 00:21:53.623 "hostsvcid": "60000", 00:21:53.623 "prchk_reftag": false, 00:21:53.623 "prchk_guard": false, 00:21:53.623 "hdgst": false, 00:21:53.623 "ddgst": false, 00:21:53.623 "method": "bdev_nvme_attach_controller", 00:21:53.623 "req_id": 1 00:21:53.623 } 00:21:53.623 Got JSON-RPC error response 00:21:53.623 response: 00:21:53.623 { 00:21:53.623 "code": -114, 00:21:53.623 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:21:53.623 } 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.623 request: 00:21:53.623 { 00:21:53.623 "name": "NVMe0", 00:21:53.623 "trtype": "tcp", 00:21:53.623 "traddr": "10.0.0.2", 00:21:53.623 "adrfam": "ipv4", 00:21:53.623 "trsvcid": "4420", 00:21:53.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.623 "hostaddr": "10.0.0.2", 00:21:53.623 "hostsvcid": "60000", 00:21:53.623 "prchk_reftag": false, 00:21:53.623 "prchk_guard": false, 00:21:53.623 "hdgst": false, 00:21:53.623 "ddgst": false, 00:21:53.623 "multipath": "disable", 00:21:53.623 "method": "bdev_nvme_attach_controller", 00:21:53.623 "req_id": 1 00:21:53.623 } 00:21:53.623 Got JSON-RPC error response 00:21:53.623 response: 00:21:53.623 { 00:21:53.623 "code": -114, 00:21:53.623 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:53.623 } 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.623 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.624 request: 00:21:53.624 { 00:21:53.624 "name": "NVMe0", 00:21:53.624 "trtype": "tcp", 00:21:53.624 "traddr": "10.0.0.2", 00:21:53.624 "adrfam": "ipv4", 00:21:53.624 "trsvcid": "4420", 00:21:53.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.624 "hostaddr": "10.0.0.2", 00:21:53.624 "hostsvcid": "60000", 00:21:53.624 "prchk_reftag": false, 00:21:53.624 "prchk_guard": false, 00:21:53.624 "hdgst": false, 00:21:53.624 "ddgst": false, 00:21:53.624 "multipath": "failover", 00:21:53.624 "method": "bdev_nvme_attach_controller", 00:21:53.624 "req_id": 1 00:21:53.624 } 00:21:53.624 Got JSON-RPC error response 00:21:53.624 response: 00:21:53.624 { 00:21:53.624 "code": -114, 00:21:53.624 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:53.624 } 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.624 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.883 00:21:53.883 21:47:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.883 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.883 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:53.884 21:47:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:55.263 0 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3124990 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3124990 ']' 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3124990 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3124990 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 
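On the host side, bdevperf is started in RPC-wait mode (-z -r /var/tmp/bdevperf.sock) and the paths are attached through that socket. The duplicate attach attempts traced above (different hostnqn, different subnqn, multipath "disable") are expected to fail with JSON-RPC error -114, while the plain attach to the second listener on port 4421 is accepted as an additional path. A minimal sketch of the accepted calls, assuming the same socket, addresses, and controller names as this run:

  # First path to cnode1 via port 4420 (multicontroller.sh@50).
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # Second path to the same subsystem via port 4421 (multicontroller.sh@79).
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Drop that path again and re-attach it as a separate controller NVMe1 (multicontroller.sh@83,87).
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # Kick off the configured write workload over bdevperf's RPC socket (multicontroller.sh@95).
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests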
00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3124990' 00:21:55.263 killing process with pid 3124990 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3124990 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3124990 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # sort -u 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # cat 00:21:55.263 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:55.263 [2024-07-24 21:47:00.686733] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:21:55.263 [2024-07-24 21:47:00.686782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3124990 ] 00:21:55.263 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.263 [2024-07-24 21:47:00.740933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.263 [2024-07-24 21:47:00.815652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.263 [2024-07-24 21:47:01.912943] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 554bb20f-5b7e-4cf9-9042-f8b785d88f24 already exists 00:21:55.263 [2024-07-24 21:47:01.912972] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:554bb20f-5b7e-4cf9-9042-f8b785d88f24 alias for bdev NVMe1n1 00:21:55.263 [2024-07-24 21:47:01.912980] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:55.263 Running I/O for 1 seconds... 00:21:55.263 00:21:55.263 Latency(us) 00:21:55.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.263 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:55.263 NVMe0n1 : 1.01 23190.06 90.59 0.00 0.00 5502.02 3276.80 20971.52 00:21:55.263 =================================================================================================================== 00:21:55.263 Total : 23190.06 90.59 0.00 0.00 5502.02 3276.80 20971.52 00:21:55.263 Received shutdown signal, test time was about 1.000000 seconds 00:21:55.263 00:21:55.263 Latency(us) 00:21:55.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.263 =================================================================================================================== 00:21:55.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:55.263 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1616 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:55.263 rmmod nvme_tcp 00:21:55.263 rmmod nvme_fabrics 00:21:55.263 rmmod nvme_keyring 00:21:55.263 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3124746 ']' 00:21:55.522 21:47:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3124746 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3124746 ']' 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3124746 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3124746 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3124746' 00:21:55.522 killing process with pid 3124746 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3124746 00:21:55.522 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3124746 00:21:55.782 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.782 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.782 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.782 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.782 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.782 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.782 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.782 21:47:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.691 21:47:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.691 00:21:57.691 real 0m11.532s 00:21:57.691 user 0m16.016s 00:21:57.691 sys 0m4.678s 00:21:57.691 21:47:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:57.691 21:47:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.691 ************************************ 00:21:57.691 END TEST nvmf_multicontroller 00:21:57.691 ************************************ 00:21:57.691 21:47:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:57.691 21:47:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:57.691 21:47:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.691 21:47:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.691 ************************************ 00:21:57.691 START TEST nvmf_aer 00:21:57.691 ************************************ 00:21:57.691 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:57.951 * Looking for test storage... 00:21:57.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.951 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.952 21:47:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:03.233 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:03.233 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:03.233 Found net devices under 0000:86:00.0: cvl_0_0 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.233 21:47:11 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:03.233 Found net devices under 0000:86:00.1: cvl_0_1 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.233 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:03.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:22:03.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:22:03.233 00:22:03.233 --- 10.0.0.2 ping statistics --- 00:22:03.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.233 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:03.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:22:03.234 00:22:03.234 --- 10.0.0.1 ping statistics --- 00:22:03.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.234 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3128900 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3128900 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3128900 ']' 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.234 21:47:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:03.495 [2024-07-24 21:47:11.383949] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:22:03.495 [2024-07-24 21:47:11.383992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.495 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.495 [2024-07-24 21:47:11.441718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.495 [2024-07-24 21:47:11.519929] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.495 [2024-07-24 21:47:11.519972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.495 [2024-07-24 21:47:11.519979] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.495 [2024-07-24 21:47:11.519984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.495 [2024-07-24 21:47:11.519989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.495 [2024-07-24 21:47:11.520051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.495 [2024-07-24 21:47:11.520144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.495 [2024-07-24 21:47:11.520231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.495 [2024-07-24 21:47:11.520232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.435 [2024-07-24 21:47:12.235302] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.435 Malloc0 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.435 21:47:12 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.435 [2024-07-24 21:47:12.287156] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.435 [ 00:22:04.435 { 00:22:04.435 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:04.435 "subtype": "Discovery", 00:22:04.435 "listen_addresses": [], 00:22:04.435 "allow_any_host": true, 00:22:04.435 "hosts": [] 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.435 "subtype": "NVMe", 00:22:04.435 "listen_addresses": [ 00:22:04.435 { 00:22:04.435 "trtype": "TCP", 00:22:04.435 "adrfam": "IPv4", 00:22:04.435 "traddr": "10.0.0.2", 00:22:04.435 "trsvcid": "4420" 00:22:04.435 } 00:22:04.435 ], 00:22:04.435 "allow_any_host": true, 00:22:04.435 "hosts": [], 00:22:04.435 "serial_number": "SPDK00000000000001", 00:22:04.435 "model_number": "SPDK bdev Controller", 00:22:04.435 "max_namespaces": 2, 00:22:04.435 "min_cntlid": 1, 00:22:04.435 "max_cntlid": 65519, 00:22:04.435 "namespaces": [ 00:22:04.435 { 00:22:04.435 "nsid": 1, 00:22:04.435 "bdev_name": "Malloc0", 00:22:04.435 "name": "Malloc0", 00:22:04.435 "nguid": "C040D78D012840F2ADDEE61BEDC8106A", 00:22:04.435 "uuid": "c040d78d-0128-40f2-adde-e61bedc8106a" 00:22:04.435 } 00:22:04.435 ] 00:22:04.435 } 00:22:04.435 ] 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:04.435 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3129014 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1263 -- # local i=0 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 0 -lt 200 ']' 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=1 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:22:04.436 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 1 -lt 200 ']' 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=2 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # return 0 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.436 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.696 Malloc1 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.696 Asynchronous Event Request test 00:22:04.696 Attaching to 10.0.0.2 00:22:04.696 Attached to 10.0.0.2 00:22:04.696 Registering asynchronous event callbacks... 00:22:04.696 Starting namespace attribute notice tests for all controllers... 00:22:04.696 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:04.696 aer_cb - Changed Namespace 00:22:04.696 Cleaning up... 
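The AER exercise traced above reduces to a short RPC flow: expose a malloc bdev through a TCP subsystem capped at two namespaces, start the aer listener so it blocks until a namespace-attribute-changed event arrives, then hot-add a second namespace to fire that event. A minimal standalone sketch of the same flow, assuming a built SPDK tree and scripts/rpc.py (the standard SPDK RPC client, not shown in the trace) pointed at the running target's RPC socket; all command arguments below are taken from this run:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 --name Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # AER listener: -n 2 makes it wait until a second namespace shows up,
  # -t touches a file once its callbacks are registered so the caller can proceed.
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # Hot-adding the second namespace triggers the 'Changed Namespace' AEN logged above.
  $rpc bdev_malloc_create 64 4096 --name Malloc1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait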
00:22:04.696 [ 00:22:04.696 { 00:22:04.696 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:04.696 "subtype": "Discovery", 00:22:04.696 "listen_addresses": [], 00:22:04.696 "allow_any_host": true, 00:22:04.696 "hosts": [] 00:22:04.696 }, 00:22:04.696 { 00:22:04.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.696 "subtype": "NVMe", 00:22:04.696 "listen_addresses": [ 00:22:04.696 { 00:22:04.696 "trtype": "TCP", 00:22:04.696 "adrfam": "IPv4", 00:22:04.696 "traddr": "10.0.0.2", 00:22:04.696 "trsvcid": "4420" 00:22:04.696 } 00:22:04.696 ], 00:22:04.696 "allow_any_host": true, 00:22:04.696 "hosts": [], 00:22:04.696 "serial_number": "SPDK00000000000001", 00:22:04.696 "model_number": "SPDK bdev Controller", 00:22:04.696 "max_namespaces": 2, 00:22:04.696 "min_cntlid": 1, 00:22:04.696 "max_cntlid": 65519, 00:22:04.696 "namespaces": [ 00:22:04.696 { 00:22:04.696 "nsid": 1, 00:22:04.696 "bdev_name": "Malloc0", 00:22:04.696 "name": "Malloc0", 00:22:04.696 "nguid": "C040D78D012840F2ADDEE61BEDC8106A", 00:22:04.696 "uuid": "c040d78d-0128-40f2-adde-e61bedc8106a" 00:22:04.696 }, 00:22:04.696 { 00:22:04.696 "nsid": 2, 00:22:04.696 "bdev_name": "Malloc1", 00:22:04.696 "name": "Malloc1", 00:22:04.696 "nguid": "4075D59F13F742EC9BEFAE42082CCAC8", 00:22:04.696 "uuid": "4075d59f-13f7-42ec-9bef-ae42082ccac8" 00:22:04.696 } 00:22:04.696 ] 00:22:04.696 } 00:22:04.696 ] 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3129014 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.696 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:04.697 rmmod 
nvme_tcp 00:22:04.697 rmmod nvme_fabrics 00:22:04.697 rmmod nvme_keyring 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3128900 ']' 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3128900 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3128900 ']' 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3128900 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3128900 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3128900' 00:22:04.697 killing process with pid 3128900 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3128900 00:22:04.697 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3128900 00:22:04.958 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:04.958 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:04.958 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:04.958 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:04.958 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:04.958 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.958 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.958 21:47:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:07.498 00:22:07.498 real 0m9.232s 00:22:07.498 user 0m7.253s 00:22:07.498 sys 0m4.485s 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:07.498 ************************************ 00:22:07.498 END TEST nvmf_aer 00:22:07.498 ************************************ 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.498 
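Both host tests in this section rebuild the same nvmf_tcp_init topology before starting the target: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A sketch of that setup, using only the interface, namespace, and address names from this run:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open TCP/4420 on the initiator-side port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

nvmf_tgt is then launched through ip netns exec cvl_0_0_ns_spdk so that it listens on the target side of this link, which is why both tests prefix the app and several RPC-side commands with the namespace.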
************************************ 00:22:07.498 START TEST nvmf_async_init 00:22:07.498 ************************************ 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:07.498 * Looking for test storage... 00:22:07.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.498 
21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:07.498 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e853b3d69b7e4071807bb9cfd768cad4 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:07.499 21:47:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:12.848 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.848 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:12.848 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:12.848 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 
00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:12.849 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:12.849 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:12.849 
21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:12.849 Found net devices under 0000:86:00.0: cvl_0_0 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:12.849 Found net devices under 0000:86:00.1: cvl_0_1 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.849 21:47:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:12.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:22:12.849 00:22:12.849 --- 10.0.0.2 ping statistics --- 00:22:12.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.849 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:12.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:22:12.849 00:22:12.849 --- 10.0.0.1 ping statistics --- 00:22:12.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.849 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3132526 00:22:12.849 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3132526 00:22:12.850 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:12.850 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3132526 ']' 00:22:12.850 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.850 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.850 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.850 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.850 21:47:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:12.850 [2024-07-24 21:47:20.567512] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:22:12.850 [2024-07-24 21:47:20.567557] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.850 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.850 [2024-07-24 21:47:20.625357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.850 [2024-07-24 21:47:20.696843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.850 [2024-07-24 21:47:20.696886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.850 [2024-07-24 21:47:20.696892] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.850 [2024-07-24 21:47:20.696899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.850 [2024-07-24 21:47:20.696903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.850 [2024-07-24 21:47:20.696921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.421 [2024-07-24 21:47:21.411564] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.421 null0 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:13.421 21:47:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e853b3d69b7e4071807bb9cfd768cad4 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.421 [2024-07-24 21:47:21.455794] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.421 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.681 nvme0n1 00:22:13.681 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.681 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:13.681 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.681 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.681 [ 00:22:13.681 { 00:22:13.681 "name": "nvme0n1", 00:22:13.681 "aliases": [ 00:22:13.681 "e853b3d6-9b7e-4071-807b-b9cfd768cad4" 00:22:13.681 ], 00:22:13.681 "product_name": "NVMe disk", 00:22:13.681 "block_size": 512, 00:22:13.681 "num_blocks": 2097152, 00:22:13.681 "uuid": "e853b3d6-9b7e-4071-807b-b9cfd768cad4", 00:22:13.681 "assigned_rate_limits": { 00:22:13.681 "rw_ios_per_sec": 0, 00:22:13.681 "rw_mbytes_per_sec": 0, 00:22:13.681 "r_mbytes_per_sec": 0, 00:22:13.681 "w_mbytes_per_sec": 0 00:22:13.682 }, 00:22:13.682 "claimed": false, 00:22:13.682 "zoned": false, 00:22:13.682 "supported_io_types": { 00:22:13.682 "read": true, 00:22:13.682 "write": true, 00:22:13.682 "unmap": false, 00:22:13.682 "flush": true, 00:22:13.682 "reset": true, 00:22:13.682 "nvme_admin": true, 00:22:13.682 "nvme_io": true, 00:22:13.682 "nvme_io_md": false, 00:22:13.682 "write_zeroes": true, 00:22:13.682 "zcopy": false, 00:22:13.682 "get_zone_info": false, 00:22:13.682 "zone_management": false, 00:22:13.682 "zone_append": false, 00:22:13.682 "compare": true, 00:22:13.682 "compare_and_write": true, 00:22:13.682 "abort": true, 00:22:13.682 "seek_hole": false, 00:22:13.682 "seek_data": false, 00:22:13.682 "copy": true, 00:22:13.682 "nvme_iov_md": 
false 00:22:13.682 }, 00:22:13.682 "memory_domains": [ 00:22:13.682 { 00:22:13.682 "dma_device_id": "system", 00:22:13.682 "dma_device_type": 1 00:22:13.682 } 00:22:13.682 ], 00:22:13.682 "driver_specific": { 00:22:13.682 "nvme": [ 00:22:13.682 { 00:22:13.682 "trid": { 00:22:13.682 "trtype": "TCP", 00:22:13.682 "adrfam": "IPv4", 00:22:13.682 "traddr": "10.0.0.2", 00:22:13.682 "trsvcid": "4420", 00:22:13.682 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:13.682 }, 00:22:13.682 "ctrlr_data": { 00:22:13.682 "cntlid": 1, 00:22:13.682 "vendor_id": "0x8086", 00:22:13.682 "model_number": "SPDK bdev Controller", 00:22:13.682 "serial_number": "00000000000000000000", 00:22:13.682 "firmware_revision": "24.09", 00:22:13.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.682 "oacs": { 00:22:13.682 "security": 0, 00:22:13.682 "format": 0, 00:22:13.682 "firmware": 0, 00:22:13.682 "ns_manage": 0 00:22:13.682 }, 00:22:13.682 "multi_ctrlr": true, 00:22:13.682 "ana_reporting": false 00:22:13.682 }, 00:22:13.682 "vs": { 00:22:13.682 "nvme_version": "1.3" 00:22:13.682 }, 00:22:13.682 "ns_data": { 00:22:13.682 "id": 1, 00:22:13.682 "can_share": true 00:22:13.682 } 00:22:13.682 } 00:22:13.682 ], 00:22:13.682 "mp_policy": "active_passive" 00:22:13.682 } 00:22:13.682 } 00:22:13.682 ] 00:22:13.682 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.682 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:13.682 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.682 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.682 [2024-07-24 21:47:21.717260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.682 [2024-07-24 21:47:21.717314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fe390 (9): Bad file descriptor 00:22:13.943 [2024-07-24 21:47:21.849121] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
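What the nvmf_async_init steps above amount to: the test publishes a null bdev through cnode0, attaches a host-side NVMe-oF controller to it over the loopback TCP link, and resets that controller; in the bdev_get_bdevs output that follows, cntlid moves from 1 to 2, showing that the reset tore down the first admin connection and the reconnect created a new controller on the target. A sketch of the RPC sequence, with the same names and arguments as this run and the same scripts/rpc.py assumption as earlier:

  rpc=scripts/rpc.py
  # target side: a 1024 MiB null bdev with 512-byte blocks behind nqn.2016-06.io.spdk:cnode0
  $rpc nvmf_create_transport -t tcp -o
  $rpc bdev_null_create null0 1024 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e853b3d69b7e4071807bb9cfd768cad4
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # host side: attach, inspect, reset, inspect again
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  $rpc bdev_get_bdevs -b nvme0n1             # ctrlr_data.cntlid == 1
  $rpc bdev_nvme_reset_controller nvme0      # disconnect/reconnect shown in the notices above
  $rpc bdev_get_bdevs -b nvme0n1             # cntlid == 2 after the new connection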
00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.943 [ 00:22:13.943 { 00:22:13.943 "name": "nvme0n1", 00:22:13.943 "aliases": [ 00:22:13.943 "e853b3d6-9b7e-4071-807b-b9cfd768cad4" 00:22:13.943 ], 00:22:13.943 "product_name": "NVMe disk", 00:22:13.943 "block_size": 512, 00:22:13.943 "num_blocks": 2097152, 00:22:13.943 "uuid": "e853b3d6-9b7e-4071-807b-b9cfd768cad4", 00:22:13.943 "assigned_rate_limits": { 00:22:13.943 "rw_ios_per_sec": 0, 00:22:13.943 "rw_mbytes_per_sec": 0, 00:22:13.943 "r_mbytes_per_sec": 0, 00:22:13.943 "w_mbytes_per_sec": 0 00:22:13.943 }, 00:22:13.943 "claimed": false, 00:22:13.943 "zoned": false, 00:22:13.943 "supported_io_types": { 00:22:13.943 "read": true, 00:22:13.943 "write": true, 00:22:13.943 "unmap": false, 00:22:13.943 "flush": true, 00:22:13.943 "reset": true, 00:22:13.943 "nvme_admin": true, 00:22:13.943 "nvme_io": true, 00:22:13.943 "nvme_io_md": false, 00:22:13.943 "write_zeroes": true, 00:22:13.943 "zcopy": false, 00:22:13.943 "get_zone_info": false, 00:22:13.943 "zone_management": false, 00:22:13.943 "zone_append": false, 00:22:13.943 "compare": true, 00:22:13.943 "compare_and_write": true, 00:22:13.943 "abort": true, 00:22:13.943 "seek_hole": false, 00:22:13.943 "seek_data": false, 00:22:13.943 "copy": true, 00:22:13.943 "nvme_iov_md": false 00:22:13.943 }, 00:22:13.943 "memory_domains": [ 00:22:13.943 { 00:22:13.943 "dma_device_id": "system", 00:22:13.943 "dma_device_type": 1 00:22:13.943 } 00:22:13.943 ], 00:22:13.943 "driver_specific": { 00:22:13.943 "nvme": [ 00:22:13.943 { 00:22:13.943 "trid": { 00:22:13.943 "trtype": "TCP", 00:22:13.943 "adrfam": "IPv4", 00:22:13.943 "traddr": "10.0.0.2", 00:22:13.943 "trsvcid": "4420", 00:22:13.943 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:13.943 }, 00:22:13.943 "ctrlr_data": { 00:22:13.943 "cntlid": 2, 00:22:13.943 "vendor_id": "0x8086", 00:22:13.943 "model_number": "SPDK bdev Controller", 00:22:13.943 "serial_number": "00000000000000000000", 00:22:13.943 "firmware_revision": "24.09", 00:22:13.943 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.943 "oacs": { 00:22:13.943 "security": 0, 00:22:13.943 "format": 0, 00:22:13.943 "firmware": 0, 00:22:13.943 "ns_manage": 0 00:22:13.943 }, 00:22:13.943 "multi_ctrlr": true, 00:22:13.943 "ana_reporting": false 00:22:13.943 }, 00:22:13.943 "vs": { 00:22:13.943 "nvme_version": "1.3" 00:22:13.943 }, 00:22:13.943 "ns_data": { 00:22:13.943 "id": 1, 00:22:13.943 "can_share": true 00:22:13.943 } 00:22:13.943 } 00:22:13.943 ], 00:22:13.943 "mp_policy": "active_passive" 00:22:13.943 } 00:22:13.943 } 00:22:13.943 ] 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.943 21:47:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.FnN7rFdDPQ 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.FnN7rFdDPQ 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.943 [2024-07-24 21:47:21.909841] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:13.943 [2024-07-24 21:47:21.909938] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FnN7rFdDPQ 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.943 [2024-07-24 21:47:21.917858] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:13.943 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.944 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FnN7rFdDPQ 00:22:13.944 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.944 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.944 [2024-07-24 21:47:21.929902] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.944 [2024-07-24 21:47:21.929936] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:13.944 nvme0n1 00:22:13.944 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.944 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:13.944 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:13.944 21:47:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.944 [ 00:22:13.944 { 00:22:13.944 "name": "nvme0n1", 00:22:13.944 "aliases": [ 00:22:13.944 "e853b3d6-9b7e-4071-807b-b9cfd768cad4" 00:22:13.944 ], 00:22:13.944 "product_name": "NVMe disk", 00:22:13.944 "block_size": 512, 00:22:13.944 "num_blocks": 2097152, 00:22:13.944 "uuid": "e853b3d6-9b7e-4071-807b-b9cfd768cad4", 00:22:13.944 "assigned_rate_limits": { 00:22:13.944 "rw_ios_per_sec": 0, 00:22:13.944 "rw_mbytes_per_sec": 0, 00:22:13.944 "r_mbytes_per_sec": 0, 00:22:13.944 "w_mbytes_per_sec": 0 00:22:13.944 }, 00:22:13.944 "claimed": false, 00:22:13.944 "zoned": false, 00:22:13.944 "supported_io_types": { 00:22:13.944 "read": true, 00:22:13.944 "write": true, 00:22:13.944 "unmap": false, 00:22:13.944 "flush": true, 00:22:13.944 "reset": true, 00:22:13.944 "nvme_admin": true, 00:22:13.944 "nvme_io": true, 00:22:13.944 "nvme_io_md": false, 00:22:13.944 "write_zeroes": true, 00:22:13.944 "zcopy": false, 00:22:13.944 "get_zone_info": false, 00:22:13.944 "zone_management": false, 00:22:13.944 "zone_append": false, 00:22:13.944 "compare": true, 00:22:13.944 "compare_and_write": true, 00:22:13.944 "abort": true, 00:22:13.944 "seek_hole": false, 00:22:13.944 "seek_data": false, 00:22:13.944 "copy": true, 00:22:13.944 "nvme_iov_md": false 00:22:13.944 }, 00:22:13.944 "memory_domains": [ 00:22:13.944 { 00:22:13.944 "dma_device_id": "system", 00:22:13.944 "dma_device_type": 1 00:22:13.944 } 00:22:13.944 ], 00:22:13.944 "driver_specific": { 00:22:13.944 "nvme": [ 00:22:13.944 { 00:22:13.944 "trid": { 00:22:13.944 "trtype": "TCP", 00:22:13.944 "adrfam": "IPv4", 00:22:13.944 "traddr": "10.0.0.2", 00:22:13.944 "trsvcid": "4421", 00:22:13.944 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:13.944 }, 00:22:13.944 "ctrlr_data": { 00:22:13.944 "cntlid": 3, 00:22:13.944 "vendor_id": "0x8086", 00:22:13.944 "model_number": "SPDK bdev Controller", 00:22:13.944 "serial_number": "00000000000000000000", 00:22:13.944 "firmware_revision": "24.09", 00:22:13.944 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.944 "oacs": { 00:22:13.944 "security": 0, 00:22:13.944 "format": 0, 00:22:13.944 "firmware": 0, 00:22:13.944 "ns_manage": 0 00:22:13.944 }, 00:22:13.944 "multi_ctrlr": true, 00:22:13.944 "ana_reporting": false 00:22:13.944 }, 00:22:13.944 "vs": { 00:22:13.944 "nvme_version": "1.3" 00:22:13.944 }, 00:22:13.944 "ns_data": { 00:22:13.944 "id": 1, 00:22:13.944 "can_share": true 00:22:13.944 } 00:22:13.944 } 00:22:13.944 ], 00:22:13.944 "mp_policy": "active_passive" 00:22:13.944 } 00:22:13.944 } 00:22:13.944 ] 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.FnN7rFdDPQ 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:13.944 21:47:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:13.944 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:13.944 rmmod nvme_tcp 00:22:14.205 rmmod nvme_fabrics 00:22:14.205 rmmod nvme_keyring 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3132526 ']' 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3132526 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3132526 ']' 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3132526 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3132526 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3132526' 00:22:14.205 killing process with pid 3132526 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3132526 00:22:14.205 [2024-07-24 21:47:22.138004] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:14.205 [2024-07-24 21:47:22.138028] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3132526 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.205 21:47:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.205 21:47:22 
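The nvmftestfini cleanup here amounts to unloading the host-side NVMe modules, stopping the target application, and removing the per-test namespace. A rough sketch, with the PID and interface name taken from this run and the _remove_spdk_ns helper assumed to reduce to a plain ip netns delete:

  # Unload host-side modules (removing nvme_tcp also drops nvme_fabrics and nvme_keyring).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Stop the nvmf_tgt instance started for this test.
  kill 3132526
  # Tear down the test namespace (assumed equivalent of _remove_spdk_ns) and flush the initiator address.
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null
  ip -4 addr flush cvl_0_1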
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:16.748 00:22:16.748 real 0m9.289s 00:22:16.748 user 0m3.530s 00:22:16.748 sys 0m4.305s 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:16.748 ************************************ 00:22:16.748 END TEST nvmf_async_init 00:22:16.748 ************************************ 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.748 ************************************ 00:22:16.748 START TEST dma 00:22:16.748 ************************************ 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:16.748 * Looking for test storage... 00:22:16.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.748 
21:47:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.748 21:47:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.749 21:47:24 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:16.749 00:22:16.749 real 0m0.121s 00:22:16.749 user 0m0.066s 00:22:16.749 sys 0m0.063s 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:16.749 ************************************ 00:22:16.749 END TEST dma 00:22:16.749 ************************************ 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.749 ************************************ 00:22:16.749 START TEST nvmf_identify 00:22:16.749 ************************************ 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:16.749 * Looking for test storage... 00:22:16.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.749 21:47:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:22.031 21:47:29 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:22.031 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.031 21:47:29 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:22.031 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:22.031 Found net devices under 0000:86:00.0: cvl_0_0 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:22.031 Found net devices under 0000:86:00.1: cvl_0_1 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.031 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:22.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:22:22.032 00:22:22.032 --- 10.0.0.2 ping statistics --- 00:22:22.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.032 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:22:22.032 00:22:22.032 --- 10.0.0.1 ping statistics --- 00:22:22.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.032 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3136336 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3136336 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3136336 ']' 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.032 21:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:22.032 [2024-07-24 21:47:30.030017] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
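Because this is a phy (NET_TYPE=phy) run, nvmf_tcp_init builds the test topology from the two physical E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A condensed sketch of the commands traced above, with the nvmf_tgt path shortened:

  # Target interface inside the namespace, initiator interface outside.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP traffic in on the initiator side, then verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target application is then launched inside the namespace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &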
00:22:22.032 [2024-07-24 21:47:30.030081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.032 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.032 [2024-07-24 21:47:30.089970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:22.291 [2024-07-24 21:47:30.171062] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.291 [2024-07-24 21:47:30.171097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.291 [2024-07-24 21:47:30.171104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.291 [2024-07-24 21:47:30.171109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.291 [2024-07-24 21:47:30.171114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.291 [2024-07-24 21:47:30.171209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.291 [2024-07-24 21:47:30.171319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.291 [2024-07-24 21:47:30.171403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:22.291 [2024-07-24 21:47:30.171404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.860 [2024-07-24 21:47:30.830095] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.860 Malloc0 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.860 [2024-07-24 21:47:30.913736] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.860 [ 00:22:22.860 { 00:22:22.860 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:22.860 "subtype": "Discovery", 00:22:22.860 "listen_addresses": [ 00:22:22.860 { 00:22:22.860 "trtype": "TCP", 00:22:22.860 "adrfam": "IPv4", 00:22:22.860 "traddr": "10.0.0.2", 00:22:22.860 "trsvcid": "4420" 00:22:22.860 } 00:22:22.860 ], 00:22:22.860 "allow_any_host": true, 00:22:22.860 "hosts": [] 00:22:22.860 }, 00:22:22.860 { 00:22:22.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.860 "subtype": "NVMe", 00:22:22.860 "listen_addresses": [ 00:22:22.860 { 00:22:22.860 "trtype": "TCP", 00:22:22.860 "adrfam": "IPv4", 00:22:22.860 "traddr": "10.0.0.2", 00:22:22.860 "trsvcid": "4420" 00:22:22.860 } 00:22:22.860 ], 00:22:22.860 "allow_any_host": true, 00:22:22.860 "hosts": [], 00:22:22.860 "serial_number": "SPDK00000000000001", 00:22:22.860 "model_number": "SPDK bdev Controller", 00:22:22.860 "max_namespaces": 32, 00:22:22.860 "min_cntlid": 1, 00:22:22.860 "max_cntlid": 65519, 00:22:22.860 "namespaces": [ 00:22:22.860 { 00:22:22.860 "nsid": 1, 00:22:22.860 "bdev_name": "Malloc0", 00:22:22.860 "name": "Malloc0", 00:22:22.860 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:22.860 "eui64": "ABCDEF0123456789", 00:22:22.860 "uuid": "81f58ecb-0f47-4291-844e-4741c14e4c86" 00:22:22.860 } 00:22:22.860 ] 00:22:22.860 } 00:22:22.860 ] 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.860 21:47:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:22.860 [2024-07-24 21:47:30.964442] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:22:22.860 [2024-07-24 21:47:30.964483] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3136401 ] 00:22:22.860 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.123 [2024-07-24 21:47:30.992675] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:23.123 [2024-07-24 21:47:30.992724] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:23.123 [2024-07-24 21:47:30.992729] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:23.123 [2024-07-24 21:47:30.992741] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:23.123 [2024-07-24 21:47:30.992748] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:23.123 [2024-07-24 21:47:30.993261] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:23.123 [2024-07-24 21:47:30.993289] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x77dec0 0 00:22:23.123 [2024-07-24 21:47:31.008053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:23.123 [2024-07-24 21:47:31.008090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:23.123 [2024-07-24 21:47:31.008096] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:23.123 [2024-07-24 21:47:31.008100] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:23.123 [2024-07-24 21:47:31.008138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.008143] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.008147] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x77dec0) 00:22:23.123 [2024-07-24 21:47:31.008159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:23.123 [2024-07-24 21:47:31.008175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x800e40, cid 0, qid 0 00:22:23.123 [2024-07-24 21:47:31.016056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.123 [2024-07-24 21:47:31.016067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.123 [2024-07-24 21:47:31.016071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x800e40) on tqpair=0x77dec0 00:22:23.123 [2024-07-24 21:47:31.016085] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:23.123 [2024-07-24 21:47:31.016091] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:23.123 [2024-07-24 21:47:31.016096] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to read vs wait for vs (no timeout) 00:22:23.123 [2024-07-24 21:47:31.016109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x77dec0) 00:22:23.123 [2024-07-24 21:47:31.016123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.123 [2024-07-24 21:47:31.016135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x800e40, cid 0, qid 0 00:22:23.123 [2024-07-24 21:47:31.016317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.123 [2024-07-24 21:47:31.016330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.123 [2024-07-24 21:47:31.016333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016337] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x800e40) on tqpair=0x77dec0 00:22:23.123 [2024-07-24 21:47:31.016347] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:23.123 [2024-07-24 21:47:31.016356] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:23.123 [2024-07-24 21:47:31.016364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x77dec0) 00:22:23.123 [2024-07-24 21:47:31.016380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.123 [2024-07-24 21:47:31.016393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x800e40, cid 0, qid 0 00:22:23.123 [2024-07-24 21:47:31.016539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.123 [2024-07-24 21:47:31.016550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.123 [2024-07-24 21:47:31.016553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x800e40) on tqpair=0x77dec0 00:22:23.123 [2024-07-24 21:47:31.016563] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:23.123 [2024-07-24 21:47:31.016571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:23.123 [2024-07-24 21:47:31.016579] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016583] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x77dec0) 00:22:23.123 [2024-07-24 21:47:31.016593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.123 [2024-07-24 21:47:31.016606] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x800e40, cid 0, qid 0 00:22:23.123 [2024-07-24 21:47:31.016750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.123 [2024-07-24 21:47:31.016763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.123 [2024-07-24 21:47:31.016766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x800e40) on tqpair=0x77dec0 00:22:23.123 [2024-07-24 21:47:31.016775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:23.123 [2024-07-24 21:47:31.016786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x77dec0) 00:22:23.123 [2024-07-24 21:47:31.016800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.123 [2024-07-24 21:47:31.016813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x800e40, cid 0, qid 0 00:22:23.123 [2024-07-24 21:47:31.016954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.123 [2024-07-24 21:47:31.016963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.123 [2024-07-24 21:47:31.016967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.016970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x800e40) on tqpair=0x77dec0 00:22:23.123 [2024-07-24 21:47:31.016975] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:23.123 [2024-07-24 21:47:31.016980] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:23.123 [2024-07-24 21:47:31.016988] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:23.123 [2024-07-24 21:47:31.017093] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:23.123 [2024-07-24 21:47:31.017098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:23.123 [2024-07-24 21:47:31.017107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.017111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.017114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x77dec0) 00:22:23.123 [2024-07-24 21:47:31.017122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.123 [2024-07-24 21:47:31.017136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x800e40, cid 0, qid 0 00:22:23.123 [2024-07-24 21:47:31.017278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.123 
[2024-07-24 21:47:31.017288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.123 [2024-07-24 21:47:31.017291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.017294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x800e40) on tqpair=0x77dec0 00:22:23.123 [2024-07-24 21:47:31.017299] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:23.123 [2024-07-24 21:47:31.017310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.017314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.123 [2024-07-24 21:47:31.017317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x77dec0) 00:22:23.123 [2024-07-24 21:47:31.017324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.123 [2024-07-24 21:47:31.017337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x800e40, cid 0, qid 0 00:22:23.123 [2024-07-24 21:47:31.017477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.123 [2024-07-24 21:47:31.017487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.124 [2024-07-24 21:47:31.017490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.017493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x800e40) on tqpair=0x77dec0 00:22:23.124 [2024-07-24 21:47:31.017498] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:23.124 [2024-07-24 21:47:31.017502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:23.124 [2024-07-24 21:47:31.017511] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:23.124 [2024-07-24 21:47:31.017519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:23.124 [2024-07-24 21:47:31.017529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.017532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x77dec0) 00:22:23.124 [2024-07-24 21:47:31.017540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.124 [2024-07-24 21:47:31.017553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x800e40, cid 0, qid 0 00:22:23.124 [2024-07-24 21:47:31.017730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.124 [2024-07-24 21:47:31.017740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.124 [2024-07-24 21:47:31.017743] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.017746] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x77dec0): datao=0, datal=4096, cccid=0 00:22:23.124 [2024-07-24 21:47:31.017751] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x800e40) on tqpair(0x77dec0): expected_datao=0, payload_size=4096 00:22:23.124 [2024-07-24 21:47:31.017755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.017994] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.017999] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.124 [2024-07-24 21:47:31.059066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.124 [2024-07-24 21:47:31.059069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x800e40) on tqpair=0x77dec0 00:22:23.124 [2024-07-24 21:47:31.059080] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:23.124 [2024-07-24 21:47:31.059084] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:23.124 [2024-07-24 21:47:31.059088] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:23.124 [2024-07-24 21:47:31.059093] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:23.124 [2024-07-24 21:47:31.059097] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:23.124 [2024-07-24 21:47:31.059101] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:23.124 [2024-07-24 21:47:31.059110] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:23.124 [2024-07-24 21:47:31.059120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x77dec0) 00:22:23.124 [2024-07-24 21:47:31.059136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:23.124 [2024-07-24 21:47:31.059149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x800e40, cid 0, qid 0 00:22:23.124 [2024-07-24 21:47:31.059300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.124 [2024-07-24 21:47:31.059310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.124 [2024-07-24 21:47:31.059313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x800e40) on tqpair=0x77dec0 00:22:23.124 [2024-07-24 21:47:31.059325] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x77dec0) 00:22:23.124 [2024-07-24 21:47:31.059337] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.124 [2024-07-24 21:47:31.059343] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x77dec0) 00:22:23.124 [2024-07-24 21:47:31.059354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.124 [2024-07-24 21:47:31.059359] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059362] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059365] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x77dec0) 00:22:23.124 [2024-07-24 21:47:31.059370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.124 [2024-07-24 21:47:31.059375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.124 [2024-07-24 21:47:31.059386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.124 [2024-07-24 21:47:31.059391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:23.124 [2024-07-24 21:47:31.059402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:23.124 [2024-07-24 21:47:31.059409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059412] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x77dec0) 00:22:23.124 [2024-07-24 21:47:31.059418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.124 [2024-07-24 21:47:31.059431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x800e40, cid 0, qid 0 00:22:23.124 [2024-07-24 21:47:31.059436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x800fc0, cid 1, qid 0 00:22:23.124 [2024-07-24 21:47:31.059440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x801140, cid 2, qid 0 00:22:23.124 [2024-07-24 21:47:31.059444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.124 [2024-07-24 21:47:31.059450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x801440, cid 4, qid 0 00:22:23.124 [2024-07-24 21:47:31.059633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.124 [2024-07-24 21:47:31.059643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.124 [2024-07-24 21:47:31.059646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059649] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x801440) on tqpair=0x77dec0 00:22:23.124 [2024-07-24 21:47:31.059655] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:23.124 [2024-07-24 21:47:31.059660] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:23.124 [2024-07-24 21:47:31.059672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x77dec0) 00:22:23.124 [2024-07-24 21:47:31.059683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.124 [2024-07-24 21:47:31.059695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x801440, cid 4, qid 0 00:22:23.124 [2024-07-24 21:47:31.059878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.124 [2024-07-24 21:47:31.059888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.124 [2024-07-24 21:47:31.059891] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059894] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x77dec0): datao=0, datal=4096, cccid=4 00:22:23.124 [2024-07-24 21:47:31.059898] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x801440) on tqpair(0x77dec0): expected_datao=0, payload_size=4096 00:22:23.124 [2024-07-24 21:47:31.059902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059908] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.059912] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.060003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.124 [2024-07-24 21:47:31.060012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.124 [2024-07-24 21:47:31.060015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.060019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x801440) on tqpair=0x77dec0 00:22:23.124 [2024-07-24 21:47:31.060032] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:23.124 [2024-07-24 21:47:31.060067] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.060072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x77dec0) 00:22:23.124 [2024-07-24 21:47:31.060078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.124 [2024-07-24 21:47:31.060084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.060088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.124 [2024-07-24 21:47:31.060091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x77dec0) 00:22:23.124 [2024-07-24 21:47:31.060096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:23.124 [2024-07-24 21:47:31.060113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x801440, cid 4, qid 0 00:22:23.124 [2024-07-24 21:47:31.060117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8015c0, cid 5, qid 0 00:22:23.124 [2024-07-24 21:47:31.060316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.125 [2024-07-24 21:47:31.060326] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.125 [2024-07-24 21:47:31.060332] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.060335] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x77dec0): datao=0, datal=1024, cccid=4 00:22:23.125 [2024-07-24 21:47:31.060339] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x801440) on tqpair(0x77dec0): expected_datao=0, payload_size=1024 00:22:23.125 [2024-07-24 21:47:31.060343] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.060349] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.060352] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.060357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.125 [2024-07-24 21:47:31.060362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.125 [2024-07-24 21:47:31.060365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.060368] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8015c0) on tqpair=0x77dec0 00:22:23.125 [2024-07-24 21:47:31.101294] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.125 [2024-07-24 21:47:31.101309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.125 [2024-07-24 21:47:31.101313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.101317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x801440) on tqpair=0x77dec0 00:22:23.125 [2024-07-24 21:47:31.101336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.101340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x77dec0) 00:22:23.125 [2024-07-24 21:47:31.101348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.125 [2024-07-24 21:47:31.101366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x801440, cid 4, qid 0 00:22:23.125 [2024-07-24 21:47:31.101524] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.125 [2024-07-24 21:47:31.101535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.125 [2024-07-24 21:47:31.101538] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.101541] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x77dec0): datao=0, datal=3072, cccid=4 00:22:23.125 [2024-07-24 21:47:31.101545] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x801440) on tqpair(0x77dec0): expected_datao=0, payload_size=3072 00:22:23.125 [2024-07-24 21:47:31.101549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.101800] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.101804] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.147058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.125 [2024-07-24 21:47:31.147067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.125 [2024-07-24 21:47:31.147070] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.147074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x801440) on tqpair=0x77dec0 00:22:23.125 [2024-07-24 21:47:31.147083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.147086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x77dec0) 00:22:23.125 [2024-07-24 21:47:31.147093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.125 [2024-07-24 21:47:31.147109] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x801440, cid 4, qid 0 00:22:23.125 [2024-07-24 21:47:31.147294] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.125 [2024-07-24 21:47:31.147304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.125 [2024-07-24 21:47:31.147307] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.147313] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x77dec0): datao=0, datal=8, cccid=4 00:22:23.125 [2024-07-24 21:47:31.147317] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x801440) on tqpair(0x77dec0): expected_datao=0, payload_size=8 00:22:23.125 [2024-07-24 21:47:31.147321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.147327] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.147330] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.192057] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.125 [2024-07-24 21:47:31.192068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.125 [2024-07-24 21:47:31.192071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.125 [2024-07-24 21:47:31.192075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x801440) on tqpair=0x77dec0 00:22:23.125 ===================================================== 00:22:23.125 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:23.125 ===================================================== 00:22:23.125 Controller Capabilities/Features 00:22:23.125 ================================ 00:22:23.125 Vendor ID: 0000 00:22:23.125 Subsystem Vendor ID: 0000 00:22:23.125 Serial Number: .................... 00:22:23.125 Model Number: ........................................ 
00:22:23.125 Firmware Version: 24.09 00:22:23.125 Recommended Arb Burst: 0 00:22:23.125 IEEE OUI Identifier: 00 00 00 00:22:23.125 Multi-path I/O 00:22:23.125 May have multiple subsystem ports: No 00:22:23.125 May have multiple controllers: No 00:22:23.125 Associated with SR-IOV VF: No 00:22:23.125 Max Data Transfer Size: 131072 00:22:23.125 Max Number of Namespaces: 0 00:22:23.125 Max Number of I/O Queues: 1024 00:22:23.125 NVMe Specification Version (VS): 1.3 00:22:23.125 NVMe Specification Version (Identify): 1.3 00:22:23.125 Maximum Queue Entries: 128 00:22:23.125 Contiguous Queues Required: Yes 00:22:23.125 Arbitration Mechanisms Supported 00:22:23.125 Weighted Round Robin: Not Supported 00:22:23.125 Vendor Specific: Not Supported 00:22:23.125 Reset Timeout: 15000 ms 00:22:23.125 Doorbell Stride: 4 bytes 00:22:23.125 NVM Subsystem Reset: Not Supported 00:22:23.125 Command Sets Supported 00:22:23.125 NVM Command Set: Supported 00:22:23.125 Boot Partition: Not Supported 00:22:23.125 Memory Page Size Minimum: 4096 bytes 00:22:23.125 Memory Page Size Maximum: 4096 bytes 00:22:23.125 Persistent Memory Region: Not Supported 00:22:23.125 Optional Asynchronous Events Supported 00:22:23.125 Namespace Attribute Notices: Not Supported 00:22:23.125 Firmware Activation Notices: Not Supported 00:22:23.125 ANA Change Notices: Not Supported 00:22:23.125 PLE Aggregate Log Change Notices: Not Supported 00:22:23.125 LBA Status Info Alert Notices: Not Supported 00:22:23.125 EGE Aggregate Log Change Notices: Not Supported 00:22:23.125 Normal NVM Subsystem Shutdown event: Not Supported 00:22:23.125 Zone Descriptor Change Notices: Not Supported 00:22:23.125 Discovery Log Change Notices: Supported 00:22:23.125 Controller Attributes 00:22:23.125 128-bit Host Identifier: Not Supported 00:22:23.125 Non-Operational Permissive Mode: Not Supported 00:22:23.125 NVM Sets: Not Supported 00:22:23.125 Read Recovery Levels: Not Supported 00:22:23.125 Endurance Groups: Not Supported 00:22:23.125 Predictable Latency Mode: Not Supported 00:22:23.125 Traffic Based Keep ALive: Not Supported 00:22:23.125 Namespace Granularity: Not Supported 00:22:23.125 SQ Associations: Not Supported 00:22:23.125 UUID List: Not Supported 00:22:23.125 Multi-Domain Subsystem: Not Supported 00:22:23.125 Fixed Capacity Management: Not Supported 00:22:23.125 Variable Capacity Management: Not Supported 00:22:23.125 Delete Endurance Group: Not Supported 00:22:23.125 Delete NVM Set: Not Supported 00:22:23.125 Extended LBA Formats Supported: Not Supported 00:22:23.125 Flexible Data Placement Supported: Not Supported 00:22:23.125 00:22:23.125 Controller Memory Buffer Support 00:22:23.125 ================================ 00:22:23.125 Supported: No 00:22:23.125 00:22:23.125 Persistent Memory Region Support 00:22:23.125 ================================ 00:22:23.125 Supported: No 00:22:23.125 00:22:23.125 Admin Command Set Attributes 00:22:23.125 ============================ 00:22:23.125 Security Send/Receive: Not Supported 00:22:23.125 Format NVM: Not Supported 00:22:23.125 Firmware Activate/Download: Not Supported 00:22:23.125 Namespace Management: Not Supported 00:22:23.125 Device Self-Test: Not Supported 00:22:23.125 Directives: Not Supported 00:22:23.125 NVMe-MI: Not Supported 00:22:23.125 Virtualization Management: Not Supported 00:22:23.125 Doorbell Buffer Config: Not Supported 00:22:23.125 Get LBA Status Capability: Not Supported 00:22:23.125 Command & Feature Lockdown Capability: Not Supported 00:22:23.125 Abort Command Limit: 1 00:22:23.125 Async 
Event Request Limit: 4 00:22:23.125 Number of Firmware Slots: N/A 00:22:23.125 Firmware Slot 1 Read-Only: N/A 00:22:23.125 Firmware Activation Without Reset: N/A 00:22:23.125 Multiple Update Detection Support: N/A 00:22:23.125 Firmware Update Granularity: No Information Provided 00:22:23.125 Per-Namespace SMART Log: No 00:22:23.125 Asymmetric Namespace Access Log Page: Not Supported 00:22:23.125 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:23.125 Command Effects Log Page: Not Supported 00:22:23.125 Get Log Page Extended Data: Supported 00:22:23.125 Telemetry Log Pages: Not Supported 00:22:23.125 Persistent Event Log Pages: Not Supported 00:22:23.125 Supported Log Pages Log Page: May Support 00:22:23.125 Commands Supported & Effects Log Page: Not Supported 00:22:23.125 Feature Identifiers & Effects Log Page:May Support 00:22:23.125 NVMe-MI Commands & Effects Log Page: May Support 00:22:23.126 Data Area 4 for Telemetry Log: Not Supported 00:22:23.126 Error Log Page Entries Supported: 128 00:22:23.126 Keep Alive: Not Supported 00:22:23.126 00:22:23.126 NVM Command Set Attributes 00:22:23.126 ========================== 00:22:23.126 Submission Queue Entry Size 00:22:23.126 Max: 1 00:22:23.126 Min: 1 00:22:23.126 Completion Queue Entry Size 00:22:23.126 Max: 1 00:22:23.126 Min: 1 00:22:23.126 Number of Namespaces: 0 00:22:23.126 Compare Command: Not Supported 00:22:23.126 Write Uncorrectable Command: Not Supported 00:22:23.126 Dataset Management Command: Not Supported 00:22:23.126 Write Zeroes Command: Not Supported 00:22:23.126 Set Features Save Field: Not Supported 00:22:23.126 Reservations: Not Supported 00:22:23.126 Timestamp: Not Supported 00:22:23.126 Copy: Not Supported 00:22:23.126 Volatile Write Cache: Not Present 00:22:23.126 Atomic Write Unit (Normal): 1 00:22:23.126 Atomic Write Unit (PFail): 1 00:22:23.126 Atomic Compare & Write Unit: 1 00:22:23.126 Fused Compare & Write: Supported 00:22:23.126 Scatter-Gather List 00:22:23.126 SGL Command Set: Supported 00:22:23.126 SGL Keyed: Supported 00:22:23.126 SGL Bit Bucket Descriptor: Not Supported 00:22:23.126 SGL Metadata Pointer: Not Supported 00:22:23.126 Oversized SGL: Not Supported 00:22:23.126 SGL Metadata Address: Not Supported 00:22:23.126 SGL Offset: Supported 00:22:23.126 Transport SGL Data Block: Not Supported 00:22:23.126 Replay Protected Memory Block: Not Supported 00:22:23.126 00:22:23.126 Firmware Slot Information 00:22:23.126 ========================= 00:22:23.126 Active slot: 0 00:22:23.126 00:22:23.126 00:22:23.126 Error Log 00:22:23.126 ========= 00:22:23.126 00:22:23.126 Active Namespaces 00:22:23.126 ================= 00:22:23.126 Discovery Log Page 00:22:23.126 ================== 00:22:23.126 Generation Counter: 2 00:22:23.126 Number of Records: 2 00:22:23.126 Record Format: 0 00:22:23.126 00:22:23.126 Discovery Log Entry 0 00:22:23.126 ---------------------- 00:22:23.126 Transport Type: 3 (TCP) 00:22:23.126 Address Family: 1 (IPv4) 00:22:23.126 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:23.126 Entry Flags: 00:22:23.126 Duplicate Returned Information: 1 00:22:23.126 Explicit Persistent Connection Support for Discovery: 1 00:22:23.126 Transport Requirements: 00:22:23.126 Secure Channel: Not Required 00:22:23.126 Port ID: 0 (0x0000) 00:22:23.126 Controller ID: 65535 (0xffff) 00:22:23.126 Admin Max SQ Size: 128 00:22:23.126 Transport Service Identifier: 4420 00:22:23.126 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:23.126 Transport Address: 10.0.0.2 00:22:23.126 
Discovery Log Entry 1 00:22:23.126 ---------------------- 00:22:23.126 Transport Type: 3 (TCP) 00:22:23.126 Address Family: 1 (IPv4) 00:22:23.126 Subsystem Type: 2 (NVM Subsystem) 00:22:23.126 Entry Flags: 00:22:23.126 Duplicate Returned Information: 0 00:22:23.126 Explicit Persistent Connection Support for Discovery: 0 00:22:23.126 Transport Requirements: 00:22:23.126 Secure Channel: Not Required 00:22:23.126 Port ID: 0 (0x0000) 00:22:23.126 Controller ID: 65535 (0xffff) 00:22:23.126 Admin Max SQ Size: 128 00:22:23.126 Transport Service Identifier: 4420 00:22:23.126 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:23.126 Transport Address: 10.0.0.2 [2024-07-24 21:47:31.192149] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:23.126 [2024-07-24 21:47:31.192160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x800e40) on tqpair=0x77dec0 00:22:23.126 [2024-07-24 21:47:31.192166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.126 [2024-07-24 21:47:31.192171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x800fc0) on tqpair=0x77dec0 00:22:23.126 [2024-07-24 21:47:31.192175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.126 [2024-07-24 21:47:31.192179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x801140) on tqpair=0x77dec0 00:22:23.126 [2024-07-24 21:47:31.192183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.126 [2024-07-24 21:47:31.192187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.126 [2024-07-24 21:47:31.192191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.126 [2024-07-24 21:47:31.192201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.192204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.192208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.126 [2024-07-24 21:47:31.192214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.126 [2024-07-24 21:47:31.192227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.126 [2024-07-24 21:47:31.192426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.126 [2024-07-24 21:47:31.192436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.126 [2024-07-24 21:47:31.192439] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.192442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.126 [2024-07-24 21:47:31.192450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.192454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.192457] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.126 [2024-07-24 21:47:31.192463] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.126 [2024-07-24 21:47:31.192480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.126 [2024-07-24 21:47:31.192663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.126 [2024-07-24 21:47:31.192672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.126 [2024-07-24 21:47:31.192676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.192682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.126 [2024-07-24 21:47:31.192687] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:23.126 [2024-07-24 21:47:31.192691] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:23.126 [2024-07-24 21:47:31.192701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.192705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.192708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.126 [2024-07-24 21:47:31.192714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.126 [2024-07-24 21:47:31.192727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.126 [2024-07-24 21:47:31.192901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.126 [2024-07-24 21:47:31.192911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.126 [2024-07-24 21:47:31.192914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.192917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.126 [2024-07-24 21:47:31.192929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.192933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.192936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.126 [2024-07-24 21:47:31.192942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.126 [2024-07-24 21:47:31.192954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.126 [2024-07-24 21:47:31.193108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.126 [2024-07-24 21:47:31.193118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.126 [2024-07-24 21:47:31.193121] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.193125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.126 [2024-07-24 21:47:31.193136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.193139] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.126 [2024-07-24 21:47:31.193142] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.126 [2024-07-24 21:47:31.193149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.126 [2024-07-24 21:47:31.193161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.126 [2024-07-24 21:47:31.193345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.126 [2024-07-24 21:47:31.193354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.127 [2024-07-24 21:47:31.193357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.193361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.127 [2024-07-24 21:47:31.193372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.193375] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.193379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.127 [2024-07-24 21:47:31.193385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.127 [2024-07-24 21:47:31.193397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.127 [2024-07-24 21:47:31.193582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.127 [2024-07-24 21:47:31.193595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.127 [2024-07-24 21:47:31.193599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.193602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.127 [2024-07-24 21:47:31.193613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.193617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.193620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.127 [2024-07-24 21:47:31.193626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.127 [2024-07-24 21:47:31.193639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.127 [2024-07-24 21:47:31.193834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.127 [2024-07-24 21:47:31.193843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.127 [2024-07-24 21:47:31.193846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.193850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.127 [2024-07-24 21:47:31.193861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.193864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.193867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.127 [2024-07-24 21:47:31.193874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.127 [2024-07-24 21:47:31.193886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.127 [2024-07-24 21:47:31.194036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.127 [2024-07-24 21:47:31.194053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.127 [2024-07-24 21:47:31.194056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.127 [2024-07-24 21:47:31.194071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.127 [2024-07-24 21:47:31.194084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.127 [2024-07-24 21:47:31.194096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.127 [2024-07-24 21:47:31.194274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.127 [2024-07-24 21:47:31.194284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.127 [2024-07-24 21:47:31.194287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.127 [2024-07-24 21:47:31.194301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194305] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.127 [2024-07-24 21:47:31.194314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.127 [2024-07-24 21:47:31.194326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.127 [2024-07-24 21:47:31.194474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.127 [2024-07-24 21:47:31.194483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.127 [2024-07-24 21:47:31.194489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.127 [2024-07-24 21:47:31.194503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.127 [2024-07-24 21:47:31.194516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.127 [2024-07-24 21:47:31.194528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.127 [2024-07-24 21:47:31.194688] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.127 [2024-07-24 21:47:31.194697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.127 [2024-07-24 21:47:31.194700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.127 [2024-07-24 21:47:31.194715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194721] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.127 [2024-07-24 21:47:31.194728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.127 [2024-07-24 21:47:31.194739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.127 [2024-07-24 21:47:31.194886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.127 [2024-07-24 21:47:31.194896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.127 [2024-07-24 21:47:31.194899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194902] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.127 [2024-07-24 21:47:31.194913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.194920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.127 [2024-07-24 21:47:31.194926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.127 [2024-07-24 21:47:31.194938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.127 [2024-07-24 21:47:31.195135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.127 [2024-07-24 21:47:31.195145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.127 [2024-07-24 21:47:31.195149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.195152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.127 [2024-07-24 21:47:31.195163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.195167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.127 [2024-07-24 21:47:31.195170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.127 [2024-07-24 21:47:31.195176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.127 [2024-07-24 21:47:31.195188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.127 [2024-07-24 21:47:31.195373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.128 [2024-07-24 21:47:31.195382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.128 [2024-07-24 21:47:31.195385] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.195388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.128 [2024-07-24 21:47:31.195402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.195406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.195409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.128 [2024-07-24 21:47:31.195415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.128 [2024-07-24 21:47:31.195427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.128 [2024-07-24 21:47:31.195574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.128 [2024-07-24 21:47:31.195583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.128 [2024-07-24 21:47:31.195586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.195590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.128 [2024-07-24 21:47:31.195600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.195604] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.195607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.128 [2024-07-24 21:47:31.195613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.128 [2024-07-24 21:47:31.195625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.128 [2024-07-24 21:47:31.195771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.128 [2024-07-24 21:47:31.195781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.128 [2024-07-24 21:47:31.195784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.195788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.128 [2024-07-24 21:47:31.195798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.195802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.195805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.128 [2024-07-24 21:47:31.195811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.128 [2024-07-24 21:47:31.195823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.128 [2024-07-24 21:47:31.196013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.128 [2024-07-24 21:47:31.196022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.128 [2024-07-24 21:47:31.196025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.196028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.128 
[2024-07-24 21:47:31.196039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.200096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.200100] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x77dec0) 00:22:23.128 [2024-07-24 21:47:31.200108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.128 [2024-07-24 21:47:31.200122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8012c0, cid 3, qid 0 00:22:23.128 [2024-07-24 21:47:31.200466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.128 [2024-07-24 21:47:31.200471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.128 [2024-07-24 21:47:31.200475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.128 [2024-07-24 21:47:31.200478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8012c0) on tqpair=0x77dec0 00:22:23.128 [2024-07-24 21:47:31.200486] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:22:23.128 00:22:23.128 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:23.390 [2024-07-24 21:47:31.242704] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:22:23.390 [2024-07-24 21:47:31.242740] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3136559 ] 00:22:23.390 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.390 [2024-07-24 21:47:31.272298] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:23.390 [2024-07-24 21:47:31.272339] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:23.390 [2024-07-24 21:47:31.272343] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:23.390 [2024-07-24 21:47:31.272354] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:23.390 [2024-07-24 21:47:31.272362] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:23.390 [2024-07-24 21:47:31.272893] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:23.390 [2024-07-24 21:47:31.272913] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x166eec0 0 00:22:23.390 [2024-07-24 21:47:31.287052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:23.390 [2024-07-24 21:47:31.287071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:23.390 [2024-07-24 21:47:31.287075] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:23.390 [2024-07-24 21:47:31.287078] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:23.390 [2024-07-24 21:47:31.287109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:22:23.390 [2024-07-24 21:47:31.287115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.287118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166eec0) 00:22:23.390 [2024-07-24 21:47:31.287129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:23.390 [2024-07-24 21:47:31.287144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f1e40, cid 0, qid 0 00:22:23.390 [2024-07-24 21:47:31.295054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.390 [2024-07-24 21:47:31.295062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.390 [2024-07-24 21:47:31.295065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.295069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f1e40) on tqpair=0x166eec0 00:22:23.390 [2024-07-24 21:47:31.295079] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:23.390 [2024-07-24 21:47:31.295085] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:23.390 [2024-07-24 21:47:31.295089] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:23.390 [2024-07-24 21:47:31.295100] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.295103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.295106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166eec0) 00:22:23.390 [2024-07-24 21:47:31.295113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.390 [2024-07-24 21:47:31.295128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f1e40, cid 0, qid 0 00:22:23.390 [2024-07-24 21:47:31.295365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.390 [2024-07-24 21:47:31.295376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.390 [2024-07-24 21:47:31.295379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.295382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f1e40) on tqpair=0x166eec0 00:22:23.390 [2024-07-24 21:47:31.295390] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:23.390 [2024-07-24 21:47:31.295398] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:23.390 [2024-07-24 21:47:31.295406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.295409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.295412] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166eec0) 00:22:23.390 [2024-07-24 21:47:31.295419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.390 [2024-07-24 21:47:31.295431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f1e40, cid 0, qid 0 
00:22:23.390 [2024-07-24 21:47:31.295574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.390 [2024-07-24 21:47:31.295584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.390 [2024-07-24 21:47:31.295587] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.295590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f1e40) on tqpair=0x166eec0 00:22:23.390 [2024-07-24 21:47:31.295595] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:23.390 [2024-07-24 21:47:31.295603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:23.390 [2024-07-24 21:47:31.295610] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.295614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.295617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166eec0) 00:22:23.390 [2024-07-24 21:47:31.295623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.390 [2024-07-24 21:47:31.295636] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f1e40, cid 0, qid 0 00:22:23.390 [2024-07-24 21:47:31.295777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.390 [2024-07-24 21:47:31.295786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.390 [2024-07-24 21:47:31.295789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.295793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f1e40) on tqpair=0x166eec0 00:22:23.390 [2024-07-24 21:47:31.295798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:23.390 [2024-07-24 21:47:31.295808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.295812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.295815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166eec0) 00:22:23.390 [2024-07-24 21:47:31.295821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.390 [2024-07-24 21:47:31.295833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f1e40, cid 0, qid 0 00:22:23.390 [2024-07-24 21:47:31.295981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.390 [2024-07-24 21:47:31.295993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.390 [2024-07-24 21:47:31.295997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.296000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f1e40) on tqpair=0x166eec0 00:22:23.390 [2024-07-24 21:47:31.296004] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:23.390 [2024-07-24 21:47:31.296009] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 
15000 ms) 00:22:23.390 [2024-07-24 21:47:31.296017] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:23.390 [2024-07-24 21:47:31.296122] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:23.390 [2024-07-24 21:47:31.296126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:23.390 [2024-07-24 21:47:31.296133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.296137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.296140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166eec0) 00:22:23.390 [2024-07-24 21:47:31.296146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.390 [2024-07-24 21:47:31.296159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f1e40, cid 0, qid 0 00:22:23.390 [2024-07-24 21:47:31.296323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.390 [2024-07-24 21:47:31.296333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.390 [2024-07-24 21:47:31.296336] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.296339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f1e40) on tqpair=0x166eec0 00:22:23.390 [2024-07-24 21:47:31.296344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:23.390 [2024-07-24 21:47:31.296355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.296358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.296361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166eec0) 00:22:23.390 [2024-07-24 21:47:31.296368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.390 [2024-07-24 21:47:31.296380] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f1e40, cid 0, qid 0 00:22:23.390 [2024-07-24 21:47:31.296538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.390 [2024-07-24 21:47:31.296548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.390 [2024-07-24 21:47:31.296551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.390 [2024-07-24 21:47:31.296554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f1e40) on tqpair=0x166eec0 00:22:23.390 [2024-07-24 21:47:31.296559] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:23.390 [2024-07-24 21:47:31.296563] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:23.390 [2024-07-24 21:47:31.296571] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:23.391 [2024-07-24 21:47:31.296579] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.296587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.296593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166eec0) 00:22:23.391 [2024-07-24 21:47:31.296600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.391 [2024-07-24 21:47:31.296611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f1e40, cid 0, qid 0 00:22:23.391 [2024-07-24 21:47:31.296784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.391 [2024-07-24 21:47:31.296794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.391 [2024-07-24 21:47:31.296797] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.296801] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166eec0): datao=0, datal=4096, cccid=0 00:22:23.391 [2024-07-24 21:47:31.296804] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16f1e40) on tqpair(0x166eec0): expected_datao=0, payload_size=4096 00:22:23.391 [2024-07-24 21:47:31.296808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.297052] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.297056] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.391 [2024-07-24 21:47:31.338301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.391 [2024-07-24 21:47:31.338304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338308] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f1e40) on tqpair=0x166eec0 00:22:23.391 [2024-07-24 21:47:31.338315] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:23.391 [2024-07-24 21:47:31.338319] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:23.391 [2024-07-24 21:47:31.338323] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:23.391 [2024-07-24 21:47:31.338327] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:23.391 [2024-07-24 21:47:31.338331] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:23.391 [2024-07-24 21:47:31.338335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.338344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.338355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x166eec0) 00:22:23.391 [2024-07-24 21:47:31.338369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:23.391 [2024-07-24 21:47:31.338382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f1e40, cid 0, qid 0 00:22:23.391 [2024-07-24 21:47:31.338528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.391 [2024-07-24 21:47:31.338537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.391 [2024-07-24 21:47:31.338540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f1e40) on tqpair=0x166eec0 00:22:23.391 [2024-07-24 21:47:31.338551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166eec0) 00:22:23.391 [2024-07-24 21:47:31.338563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.391 [2024-07-24 21:47:31.338571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x166eec0) 00:22:23.391 [2024-07-24 21:47:31.338583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.391 [2024-07-24 21:47:31.338588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x166eec0) 00:22:23.391 [2024-07-24 21:47:31.338599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.391 [2024-07-24 21:47:31.338604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.391 [2024-07-24 21:47:31.338616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.391 [2024-07-24 21:47:31.338620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.338631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.338637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x166eec0) 
00:22:23.391 [2024-07-24 21:47:31.338646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.391 [2024-07-24 21:47:31.338659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f1e40, cid 0, qid 0 00:22:23.391 [2024-07-24 21:47:31.338664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f1fc0, cid 1, qid 0 00:22:23.391 [2024-07-24 21:47:31.338668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f2140, cid 2, qid 0 00:22:23.391 [2024-07-24 21:47:31.338672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.391 [2024-07-24 21:47:31.338676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f2440, cid 4, qid 0 00:22:23.391 [2024-07-24 21:47:31.338858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.391 [2024-07-24 21:47:31.338868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.391 [2024-07-24 21:47:31.338871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f2440) on tqpair=0x166eec0 00:22:23.391 [2024-07-24 21:47:31.338879] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:23.391 [2024-07-24 21:47:31.338883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.338895] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.338901] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.338907] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338911] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.338916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x166eec0) 00:22:23.391 [2024-07-24 21:47:31.338922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:23.391 [2024-07-24 21:47:31.338935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f2440, cid 4, qid 0 00:22:23.391 [2024-07-24 21:47:31.343051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.391 [2024-07-24 21:47:31.343063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.391 [2024-07-24 21:47:31.343067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.343070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f2440) on tqpair=0x166eec0 00:22:23.391 [2024-07-24 21:47:31.343126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.343137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:23.391 
[2024-07-24 21:47:31.343145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.343148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x166eec0) 00:22:23.391 [2024-07-24 21:47:31.343155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.391 [2024-07-24 21:47:31.343168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f2440, cid 4, qid 0 00:22:23.391 [2024-07-24 21:47:31.343497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.391 [2024-07-24 21:47:31.343508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.391 [2024-07-24 21:47:31.343511] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.343515] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166eec0): datao=0, datal=4096, cccid=4 00:22:23.391 [2024-07-24 21:47:31.343519] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16f2440) on tqpair(0x166eec0): expected_datao=0, payload_size=4096 00:22:23.391 [2024-07-24 21:47:31.343523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.343529] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.343532] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.343802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.391 [2024-07-24 21:47:31.343807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.391 [2024-07-24 21:47:31.343810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.343813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f2440) on tqpair=0x166eec0 00:22:23.391 [2024-07-24 21:47:31.343822] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:23.391 [2024-07-24 21:47:31.343832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.343842] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.343849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.343852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x166eec0) 00:22:23.391 [2024-07-24 21:47:31.343859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.391 [2024-07-24 21:47:31.343871] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f2440, cid 4, qid 0 00:22:23.391 [2024-07-24 21:47:31.344037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.391 [2024-07-24 21:47:31.344059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.391 [2024-07-24 21:47:31.344063] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.344066] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166eec0): datao=0, datal=4096, cccid=4 
00:22:23.391 [2024-07-24 21:47:31.344070] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16f2440) on tqpair(0x166eec0): expected_datao=0, payload_size=4096 00:22:23.391 [2024-07-24 21:47:31.344074] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.344079] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.344083] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.344340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.391 [2024-07-24 21:47:31.344346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.391 [2024-07-24 21:47:31.344348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.344352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f2440) on tqpair=0x166eec0 00:22:23.391 [2024-07-24 21:47:31.344365] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.344374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.344382] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.344385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x166eec0) 00:22:23.391 [2024-07-24 21:47:31.344391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.391 [2024-07-24 21:47:31.344403] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f2440, cid 4, qid 0 00:22:23.391 [2024-07-24 21:47:31.344573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.391 [2024-07-24 21:47:31.344584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.391 [2024-07-24 21:47:31.344587] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.344591] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166eec0): datao=0, datal=4096, cccid=4 00:22:23.391 [2024-07-24 21:47:31.344594] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16f2440) on tqpair(0x166eec0): expected_datao=0, payload_size=4096 00:22:23.391 [2024-07-24 21:47:31.344598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.344604] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.344608] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.344860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.391 [2024-07-24 21:47:31.344865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.391 [2024-07-24 21:47:31.344868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.391 [2024-07-24 21:47:31.344872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f2440) on tqpair=0x166eec0 00:22:23.391 [2024-07-24 21:47:31.344880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:23.391 [2024-07-24 
21:47:31.344887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.344896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.344904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:23.391 [2024-07-24 21:47:31.344908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:23.392 [2024-07-24 21:47:31.344915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:23.392 [2024-07-24 21:47:31.344919] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:23.392 [2024-07-24 21:47:31.344923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:23.392 [2024-07-24 21:47:31.344927] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:23.392 [2024-07-24 21:47:31.344940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.344944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x166eec0) 00:22:23.392 [2024-07-24 21:47:31.344950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.392 [2024-07-24 21:47:31.344956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.344959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.344962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x166eec0) 00:22:23.392 [2024-07-24 21:47:31.344967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.392 [2024-07-24 21:47:31.344981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f2440, cid 4, qid 0 00:22:23.392 [2024-07-24 21:47:31.344986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f25c0, cid 5, qid 0 00:22:23.392 [2024-07-24 21:47:31.345154] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.392 [2024-07-24 21:47:31.345165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.392 [2024-07-24 21:47:31.345168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.345171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f2440) on tqpair=0x166eec0 00:22:23.392 [2024-07-24 21:47:31.345178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.392 [2024-07-24 21:47:31.345182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.392 [2024-07-24 21:47:31.345185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.345189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f25c0) on tqpair=0x166eec0 00:22:23.392 [2024-07-24 
21:47:31.345199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.345202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x166eec0) 00:22:23.392 [2024-07-24 21:47:31.345209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.392 [2024-07-24 21:47:31.345221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f25c0, cid 5, qid 0 00:22:23.392 [2024-07-24 21:47:31.345364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.392 [2024-07-24 21:47:31.345374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.392 [2024-07-24 21:47:31.345377] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.345380] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f25c0) on tqpair=0x166eec0 00:22:23.392 [2024-07-24 21:47:31.345391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.345395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x166eec0) 00:22:23.392 [2024-07-24 21:47:31.345401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.392 [2024-07-24 21:47:31.345412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f25c0, cid 5, qid 0 00:22:23.392 [2024-07-24 21:47:31.345562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.392 [2024-07-24 21:47:31.345572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.392 [2024-07-24 21:47:31.345576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.345579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f25c0) on tqpair=0x166eec0 00:22:23.392 [2024-07-24 21:47:31.345589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.345593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x166eec0) 00:22:23.392 [2024-07-24 21:47:31.345599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.392 [2024-07-24 21:47:31.345610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f25c0, cid 5, qid 0 00:22:23.392 [2024-07-24 21:47:31.345760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.392 [2024-07-24 21:47:31.345769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.392 [2024-07-24 21:47:31.345772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.345776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f25c0) on tqpair=0x166eec0 00:22:23.392 [2024-07-24 21:47:31.345793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.345797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x166eec0) 00:22:23.392 [2024-07-24 21:47:31.345803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.392 [2024-07-24 
21:47:31.345809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.345812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x166eec0) 00:22:23.392 [2024-07-24 21:47:31.345817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.392 [2024-07-24 21:47:31.345824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.345827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x166eec0) 00:22:23.392 [2024-07-24 21:47:31.345832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.392 [2024-07-24 21:47:31.345838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.345841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x166eec0) 00:22:23.392 [2024-07-24 21:47:31.345846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.392 [2024-07-24 21:47:31.345859] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f25c0, cid 5, qid 0 00:22:23.392 [2024-07-24 21:47:31.345864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f2440, cid 4, qid 0 00:22:23.392 [2024-07-24 21:47:31.345868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f2740, cid 6, qid 0 00:22:23.392 [2024-07-24 21:47:31.345872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f28c0, cid 7, qid 0 00:22:23.392 [2024-07-24 21:47:31.346098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.392 [2024-07-24 21:47:31.346109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.392 [2024-07-24 21:47:31.346112] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346115] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166eec0): datao=0, datal=8192, cccid=5 00:22:23.392 [2024-07-24 21:47:31.346120] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16f25c0) on tqpair(0x166eec0): expected_datao=0, payload_size=8192 00:22:23.392 [2024-07-24 21:47:31.346123] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346654] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346658] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.392 [2024-07-24 21:47:31.346668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.392 [2024-07-24 21:47:31.346671] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346674] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166eec0): datao=0, datal=512, cccid=4 00:22:23.392 [2024-07-24 21:47:31.346678] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16f2440) on tqpair(0x166eec0): expected_datao=0, payload_size=512 00:22:23.392 [2024-07-24 21:47:31.346681] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346687] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346690] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.392 [2024-07-24 21:47:31.346699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.392 [2024-07-24 21:47:31.346702] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346705] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166eec0): datao=0, datal=512, cccid=6 00:22:23.392 [2024-07-24 21:47:31.346709] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16f2740) on tqpair(0x166eec0): expected_datao=0, payload_size=512 00:22:23.392 [2024-07-24 21:47:31.346712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346718] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346721] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.392 [2024-07-24 21:47:31.346730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.392 [2024-07-24 21:47:31.346733] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346736] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166eec0): datao=0, datal=4096, cccid=7 00:22:23.392 [2024-07-24 21:47:31.346739] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16f28c0) on tqpair(0x166eec0): expected_datao=0, payload_size=4096 00:22:23.392 [2024-07-24 21:47:31.346743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346749] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346751] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.392 [2024-07-24 21:47:31.346964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.392 [2024-07-24 21:47:31.346967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f25c0) on tqpair=0x166eec0 00:22:23.392 [2024-07-24 21:47:31.346981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.392 [2024-07-24 21:47:31.346987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.392 [2024-07-24 21:47:31.346990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.346993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f2440) on tqpair=0x166eec0 00:22:23.392 [2024-07-24 21:47:31.347001] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.392 [2024-07-24 21:47:31.347006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.392 [2024-07-24 21:47:31.347009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.347012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x16f2740) on tqpair=0x166eec0 00:22:23.392 [2024-07-24 21:47:31.347018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.392 [2024-07-24 21:47:31.347024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.392 [2024-07-24 21:47:31.347027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.392 [2024-07-24 21:47:31.347031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f28c0) on tqpair=0x166eec0 00:22:23.392 ===================================================== 00:22:23.392 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:23.392 ===================================================== 00:22:23.392 Controller Capabilities/Features 00:22:23.392 ================================ 00:22:23.392 Vendor ID: 8086 00:22:23.392 Subsystem Vendor ID: 8086 00:22:23.392 Serial Number: SPDK00000000000001 00:22:23.392 Model Number: SPDK bdev Controller 00:22:23.392 Firmware Version: 24.09 00:22:23.392 Recommended Arb Burst: 6 00:22:23.392 IEEE OUI Identifier: e4 d2 5c 00:22:23.392 Multi-path I/O 00:22:23.392 May have multiple subsystem ports: Yes 00:22:23.392 May have multiple controllers: Yes 00:22:23.392 Associated with SR-IOV VF: No 00:22:23.392 Max Data Transfer Size: 131072 00:22:23.392 Max Number of Namespaces: 32 00:22:23.392 Max Number of I/O Queues: 127 00:22:23.392 NVMe Specification Version (VS): 1.3 00:22:23.392 NVMe Specification Version (Identify): 1.3 00:22:23.392 Maximum Queue Entries: 128 00:22:23.392 Contiguous Queues Required: Yes 00:22:23.392 Arbitration Mechanisms Supported 00:22:23.392 Weighted Round Robin: Not Supported 00:22:23.392 Vendor Specific: Not Supported 00:22:23.392 Reset Timeout: 15000 ms 00:22:23.392 Doorbell Stride: 4 bytes 00:22:23.392 NVM Subsystem Reset: Not Supported 00:22:23.392 Command Sets Supported 00:22:23.392 NVM Command Set: Supported 00:22:23.392 Boot Partition: Not Supported 00:22:23.392 Memory Page Size Minimum: 4096 bytes 00:22:23.392 Memory Page Size Maximum: 4096 bytes 00:22:23.392 Persistent Memory Region: Not Supported 00:22:23.392 Optional Asynchronous Events Supported 00:22:23.392 Namespace Attribute Notices: Supported 00:22:23.392 Firmware Activation Notices: Not Supported 00:22:23.392 ANA Change Notices: Not Supported 00:22:23.392 PLE Aggregate Log Change Notices: Not Supported 00:22:23.392 LBA Status Info Alert Notices: Not Supported 00:22:23.392 EGE Aggregate Log Change Notices: Not Supported 00:22:23.392 Normal NVM Subsystem Shutdown event: Not Supported 00:22:23.392 Zone Descriptor Change Notices: Not Supported 00:22:23.392 Discovery Log Change Notices: Not Supported 00:22:23.392 Controller Attributes 00:22:23.392 128-bit Host Identifier: Supported 00:22:23.392 Non-Operational Permissive Mode: Not Supported 00:22:23.392 NVM Sets: Not Supported 00:22:23.392 Read Recovery Levels: Not Supported 00:22:23.392 Endurance Groups: Not Supported 00:22:23.392 Predictable Latency Mode: Not Supported 00:22:23.392 Traffic Based Keep ALive: Not Supported 00:22:23.392 Namespace Granularity: Not Supported 00:22:23.392 SQ Associations: Not Supported 00:22:23.392 UUID List: Not Supported 00:22:23.392 Multi-Domain Subsystem: Not Supported 00:22:23.392 Fixed Capacity Management: Not Supported 00:22:23.393 Variable Capacity Management: Not Supported 00:22:23.393 Delete Endurance Group: Not Supported 00:22:23.393 Delete NVM Set: Not Supported 00:22:23.393 Extended LBA Formats Supported: Not Supported 00:22:23.393 Flexible Data Placement 
Supported: Not Supported 00:22:23.393 00:22:23.393 Controller Memory Buffer Support 00:22:23.393 ================================ 00:22:23.393 Supported: No 00:22:23.393 00:22:23.393 Persistent Memory Region Support 00:22:23.393 ================================ 00:22:23.393 Supported: No 00:22:23.393 00:22:23.393 Admin Command Set Attributes 00:22:23.393 ============================ 00:22:23.393 Security Send/Receive: Not Supported 00:22:23.393 Format NVM: Not Supported 00:22:23.393 Firmware Activate/Download: Not Supported 00:22:23.393 Namespace Management: Not Supported 00:22:23.393 Device Self-Test: Not Supported 00:22:23.393 Directives: Not Supported 00:22:23.393 NVMe-MI: Not Supported 00:22:23.393 Virtualization Management: Not Supported 00:22:23.393 Doorbell Buffer Config: Not Supported 00:22:23.393 Get LBA Status Capability: Not Supported 00:22:23.393 Command & Feature Lockdown Capability: Not Supported 00:22:23.393 Abort Command Limit: 4 00:22:23.393 Async Event Request Limit: 4 00:22:23.393 Number of Firmware Slots: N/A 00:22:23.393 Firmware Slot 1 Read-Only: N/A 00:22:23.393 Firmware Activation Without Reset: N/A 00:22:23.393 Multiple Update Detection Support: N/A 00:22:23.393 Firmware Update Granularity: No Information Provided 00:22:23.393 Per-Namespace SMART Log: No 00:22:23.393 Asymmetric Namespace Access Log Page: Not Supported 00:22:23.393 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:23.393 Command Effects Log Page: Supported 00:22:23.393 Get Log Page Extended Data: Supported 00:22:23.393 Telemetry Log Pages: Not Supported 00:22:23.393 Persistent Event Log Pages: Not Supported 00:22:23.393 Supported Log Pages Log Page: May Support 00:22:23.393 Commands Supported & Effects Log Page: Not Supported 00:22:23.393 Feature Identifiers & Effects Log Page:May Support 00:22:23.393 NVMe-MI Commands & Effects Log Page: May Support 00:22:23.393 Data Area 4 for Telemetry Log: Not Supported 00:22:23.393 Error Log Page Entries Supported: 128 00:22:23.393 Keep Alive: Supported 00:22:23.393 Keep Alive Granularity: 10000 ms 00:22:23.393 00:22:23.393 NVM Command Set Attributes 00:22:23.393 ========================== 00:22:23.393 Submission Queue Entry Size 00:22:23.393 Max: 64 00:22:23.393 Min: 64 00:22:23.393 Completion Queue Entry Size 00:22:23.393 Max: 16 00:22:23.393 Min: 16 00:22:23.393 Number of Namespaces: 32 00:22:23.393 Compare Command: Supported 00:22:23.393 Write Uncorrectable Command: Not Supported 00:22:23.393 Dataset Management Command: Supported 00:22:23.393 Write Zeroes Command: Supported 00:22:23.393 Set Features Save Field: Not Supported 00:22:23.393 Reservations: Supported 00:22:23.393 Timestamp: Not Supported 00:22:23.393 Copy: Supported 00:22:23.393 Volatile Write Cache: Present 00:22:23.393 Atomic Write Unit (Normal): 1 00:22:23.393 Atomic Write Unit (PFail): 1 00:22:23.393 Atomic Compare & Write Unit: 1 00:22:23.393 Fused Compare & Write: Supported 00:22:23.393 Scatter-Gather List 00:22:23.393 SGL Command Set: Supported 00:22:23.393 SGL Keyed: Supported 00:22:23.393 SGL Bit Bucket Descriptor: Not Supported 00:22:23.393 SGL Metadata Pointer: Not Supported 00:22:23.393 Oversized SGL: Not Supported 00:22:23.393 SGL Metadata Address: Not Supported 00:22:23.393 SGL Offset: Supported 00:22:23.393 Transport SGL Data Block: Not Supported 00:22:23.393 Replay Protected Memory Block: Not Supported 00:22:23.393 00:22:23.393 Firmware Slot Information 00:22:23.393 ========================= 00:22:23.393 Active slot: 1 00:22:23.393 Slot 1 Firmware Revision: 24.09 00:22:23.393 
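[Editor's sketch] The admin-queue trace above (CC.EN = 1, wait for CSTS.RDY = 1, Identify, AER configuration, keep-alive, number of queues) is the standard fabrics bring-up that SPDK's NVMe host library performs before the identify tool prints the controller data shown in this dump. The following is a minimal, hedged sketch of how the same Identify Controller fields could be read back through the public host API (spdk/nvme.h) against the target exercised in this run (TCP, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1); it is not the test's own code, and option/field names may differ slightly between SPDK releases.

```c
/* Hedged sketch: connect to the NVMe-oF TCP subsystem exercised above and
 * print a few Identify Controller fields via SPDK's public host API.
 * Assumes spdk/nvme.h and spdk/env.h from a recent SPDK release. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same target as in the log: TCP transport, 10.0.0.2:4420, cnode1. */
	memset(&trid, 0, sizeof(trid));
	spdk_nvme_transport_id_parse(&trid,
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1");

	/* Runs the init state machine seen in the debug trace:
	 * CC.EN=1, wait for CSTS.RDY=1, Identify, AER, keep-alive, queues. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Vendor ID: %04x\n", cdata->vid);
	printf("Model Number: %.40s\n", cdata->mn);
	printf("Firmware Version: %.8s\n", cdata->fr);

	/* Triggers the controller shutdown seen later in the trace
	 * ("Prepare to destruct SSD ... shutdown complete"). */
	spdk_nvme_detach(ctrlr);
	return 0;
}
```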
00:22:23.393 00:22:23.393 Commands Supported and Effects 00:22:23.393 ============================== 00:22:23.393 Admin Commands 00:22:23.393 -------------- 00:22:23.393 Get Log Page (02h): Supported 00:22:23.393 Identify (06h): Supported 00:22:23.393 Abort (08h): Supported 00:22:23.393 Set Features (09h): Supported 00:22:23.393 Get Features (0Ah): Supported 00:22:23.393 Asynchronous Event Request (0Ch): Supported 00:22:23.393 Keep Alive (18h): Supported 00:22:23.393 I/O Commands 00:22:23.393 ------------ 00:22:23.393 Flush (00h): Supported LBA-Change 00:22:23.393 Write (01h): Supported LBA-Change 00:22:23.393 Read (02h): Supported 00:22:23.393 Compare (05h): Supported 00:22:23.393 Write Zeroes (08h): Supported LBA-Change 00:22:23.393 Dataset Management (09h): Supported LBA-Change 00:22:23.393 Copy (19h): Supported LBA-Change 00:22:23.393 00:22:23.393 Error Log 00:22:23.393 ========= 00:22:23.393 00:22:23.393 Arbitration 00:22:23.393 =========== 00:22:23.393 Arbitration Burst: 1 00:22:23.393 00:22:23.393 Power Management 00:22:23.393 ================ 00:22:23.393 Number of Power States: 1 00:22:23.393 Current Power State: Power State #0 00:22:23.393 Power State #0: 00:22:23.393 Max Power: 0.00 W 00:22:23.393 Non-Operational State: Operational 00:22:23.393 Entry Latency: Not Reported 00:22:23.393 Exit Latency: Not Reported 00:22:23.393 Relative Read Throughput: 0 00:22:23.393 Relative Read Latency: 0 00:22:23.393 Relative Write Throughput: 0 00:22:23.393 Relative Write Latency: 0 00:22:23.393 Idle Power: Not Reported 00:22:23.393 Active Power: Not Reported 00:22:23.393 Non-Operational Permissive Mode: Not Supported 00:22:23.393 00:22:23.393 Health Information 00:22:23.393 ================== 00:22:23.393 Critical Warnings: 00:22:23.393 Available Spare Space: OK 00:22:23.393 Temperature: OK 00:22:23.393 Device Reliability: OK 00:22:23.393 Read Only: No 00:22:23.393 Volatile Memory Backup: OK 00:22:23.393 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:23.393 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:23.393 Available Spare: 0% 00:22:23.393 Available Spare Threshold: 0% 00:22:23.393 Life Percentage Used:[2024-07-24 21:47:31.351118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.351124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x166eec0) 00:22:23.393 [2024-07-24 21:47:31.351130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.393 [2024-07-24 21:47:31.351144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f28c0, cid 7, qid 0 00:22:23.393 [2024-07-24 21:47:31.351578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.393 [2024-07-24 21:47:31.351584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.393 [2024-07-24 21:47:31.351587] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.351591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f28c0) on tqpair=0x166eec0 00:22:23.393 [2024-07-24 21:47:31.351618] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:23.393 [2024-07-24 21:47:31.351626] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f1e40) on tqpair=0x166eec0 00:22:23.393 [2024-07-24 21:47:31.351632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.393 [2024-07-24 21:47:31.351636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f1fc0) on tqpair=0x166eec0 00:22:23.393 [2024-07-24 21:47:31.351640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.393 [2024-07-24 21:47:31.351644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f2140) on tqpair=0x166eec0 00:22:23.393 [2024-07-24 21:47:31.351648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.393 [2024-07-24 21:47:31.351652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.393 [2024-07-24 21:47:31.351656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.393 [2024-07-24 21:47:31.351663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.351666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.351669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.393 [2024-07-24 21:47:31.351675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.393 [2024-07-24 21:47:31.351686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.393 [2024-07-24 21:47:31.351841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.393 [2024-07-24 21:47:31.351851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.393 [2024-07-24 21:47:31.351854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.351858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.393 [2024-07-24 21:47:31.351864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.351868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.351871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.393 [2024-07-24 21:47:31.351877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.393 [2024-07-24 21:47:31.351894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.393 [2024-07-24 21:47:31.352064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.393 [2024-07-24 21:47:31.352074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.393 [2024-07-24 21:47:31.352077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.352081] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.393 [2024-07-24 21:47:31.352085] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:23.393 [2024-07-24 21:47:31.352089] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:23.393 
[2024-07-24 21:47:31.352099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.352103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.352106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.393 [2024-07-24 21:47:31.352113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.393 [2024-07-24 21:47:31.352125] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.393 [2024-07-24 21:47:31.352267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.393 [2024-07-24 21:47:31.352277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.393 [2024-07-24 21:47:31.352280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.352284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.393 [2024-07-24 21:47:31.352295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.352299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.393 [2024-07-24 21:47:31.352302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.393 [2024-07-24 21:47:31.352308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.393 [2024-07-24 21:47:31.352320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.393 [2024-07-24 21:47:31.352464] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.352474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.352477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.352480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.352492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.352495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.352499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.352505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.352516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.352662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.352672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.352675] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.352678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.352689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.352693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 
[2024-07-24 21:47:31.352696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.352702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.352717] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.352859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.352868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.352871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.352875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.352886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.352890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.352893] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.352899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.352910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.353067] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.353077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.353080] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.353095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353101] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.353108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.353120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.353265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.353275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.353278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.353292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.353305] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.353317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.353463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.353473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.353476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.353490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.353503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.353517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.353657] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.353667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.353670] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.353684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353688] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.353698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.353709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.353850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.353860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.353863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.353877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.353884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.353890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.353901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.354050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.354061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.354064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.354078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354082] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.354091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.354103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.354250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.354260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.354263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.354278] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.354291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.354302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.354447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.354456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.354459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354463] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.354474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.354487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.354498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.354643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.354652] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.354656] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.354670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.354683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.354694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.354839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.354849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.354852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.354866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.354873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.354879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.354890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.355037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.359054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.359059] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.359063] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.359074] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.359078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.394 [2024-07-24 21:47:31.359081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166eec0) 00:22:23.394 [2024-07-24 21:47:31.359088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.394 [2024-07-24 21:47:31.359101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16f22c0, cid 3, qid 0 00:22:23.394 [2024-07-24 21:47:31.359333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.394 [2024-07-24 21:47:31.359347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.394 [2024-07-24 21:47:31.359350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.394 [2024-07-24 
21:47:31.359353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16f22c0) on tqpair=0x166eec0 00:22:23.394 [2024-07-24 21:47:31.359362] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:22:23.394 0% 00:22:23.394 Data Units Read: 0 00:22:23.394 Data Units Written: 0 00:22:23.394 Host Read Commands: 0 00:22:23.394 Host Write Commands: 0 00:22:23.394 Controller Busy Time: 0 minutes 00:22:23.394 Power Cycles: 0 00:22:23.394 Power On Hours: 0 hours 00:22:23.394 Unsafe Shutdowns: 0 00:22:23.394 Unrecoverable Media Errors: 0 00:22:23.394 Lifetime Error Log Entries: 0 00:22:23.394 Warning Temperature Time: 0 minutes 00:22:23.394 Critical Temperature Time: 0 minutes 00:22:23.394 00:22:23.394 Number of Queues 00:22:23.394 ================ 00:22:23.394 Number of I/O Submission Queues: 127 00:22:23.394 Number of I/O Completion Queues: 127 00:22:23.394 00:22:23.394 Active Namespaces 00:22:23.394 ================= 00:22:23.394 Namespace ID:1 00:22:23.394 Error Recovery Timeout: Unlimited 00:22:23.394 Command Set Identifier: NVM (00h) 00:22:23.394 Deallocate: Supported 00:22:23.394 Deallocated/Unwritten Error: Not Supported 00:22:23.394 Deallocated Read Value: Unknown 00:22:23.394 Deallocate in Write Zeroes: Not Supported 00:22:23.394 Deallocated Guard Field: 0xFFFF 00:22:23.394 Flush: Supported 00:22:23.394 Reservation: Supported 00:22:23.394 Namespace Sharing Capabilities: Multiple Controllers 00:22:23.394 Size (in LBAs): 131072 (0GiB) 00:22:23.394 Capacity (in LBAs): 131072 (0GiB) 00:22:23.394 Utilization (in LBAs): 131072 (0GiB) 00:22:23.394 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:23.394 EUI64: ABCDEF0123456789 00:22:23.394 UUID: 81f58ecb-0f47-4291-844e-4741c14e4c86 00:22:23.395 Thin Provisioning: Not Supported 00:22:23.395 Per-NS Atomic Units: Yes 00:22:23.395 Atomic Boundary Size (Normal): 0 00:22:23.395 Atomic Boundary Size (PFail): 0 00:22:23.395 Atomic Boundary Offset: 0 00:22:23.395 Maximum Single Source Range Length: 65535 00:22:23.395 Maximum Copy Length: 65535 00:22:23.395 Maximum Source Range Count: 1 00:22:23.395 NGUID/EUI64 Never Reused: No 00:22:23.395 Namespace Write Protected: No 00:22:23.395 Number of LBA Formats: 1 00:22:23.395 Current LBA Format: LBA Format #00 00:22:23.395 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:23.395 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:23.395 21:47:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.395 rmmod nvme_tcp 00:22:23.395 rmmod nvme_fabrics 00:22:23.395 rmmod nvme_keyring 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3136336 ']' 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3136336 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3136336 ']' 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3136336 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3136336 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3136336' 00:22:23.395 killing process with pid 3136336 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3136336 00:22:23.395 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3136336 00:22:23.654 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.654 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.654 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.654 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.654 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.654 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.654 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.654 21:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.194 00:22:26.194 real 0m9.120s 00:22:26.194 user 0m7.554s 00:22:26.194 sys 0m4.287s 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.194 ************************************ 00:22:26.194 END TEST nvmf_identify 00:22:26.194 ************************************ 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh 
--transport=tcp 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.194 ************************************ 00:22:26.194 START TEST nvmf_perf 00:22:26.194 ************************************ 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:26.194 * Looking for test storage... 00:22:26.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.194 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.195 21:47:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.499 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.499 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:31.499 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.499 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:31.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:22:31.499 00:22:31.499 --- 10.0.0.2 ping statistics --- 00:22:31.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.499 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:22:31.499 00:22:31.499 --- 10.0.0.1 ping statistics --- 00:22:31.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.499 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3139881 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3139881 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3139881 ']' 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.499 21:47:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:31.499 [2024-07-24 21:47:39.440419] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:22:31.499 [2024-07-24 21:47:39.440464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.499 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.499 [2024-07-24 21:47:39.501882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.499 [2024-07-24 21:47:39.583075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.499 [2024-07-24 21:47:39.583113] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.499 [2024-07-24 21:47:39.583121] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.499 [2024-07-24 21:47:39.583127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.499 [2024-07-24 21:47:39.583132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:31.499 [2024-07-24 21:47:39.583172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.499 [2024-07-24 21:47:39.583268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.499 [2024-07-24 21:47:39.583340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.499 [2024-07-24 21:47:39.583343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.478 21:47:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.478 21:47:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:32.478 21:47:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:32.478 21:47:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:32.478 21:47:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:32.478 21:47:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.478 21:47:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:32.478 21:47:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:35.772 21:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:35.772 21:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:35.772 21:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:35.772 21:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:35.772 21:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:35.772 21:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:35.772 21:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:35.772 21:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:35.772 21:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:35.772 [2024-07-24 21:47:43.847572] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.772 21:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:36.033 21:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:36.033 21:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:36.291 21:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:36.291 21:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:36.550 21:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.550 [2024-07-24 21:47:44.575802] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.550 21:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:36.810 21:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:36.810 21:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:36.810 21:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:36.810 21:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:38.187 Initializing NVMe Controllers 00:22:38.187 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:38.187 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:38.187 Initialization complete. Launching workers. 00:22:38.187 ======================================================== 00:22:38.187 Latency(us) 00:22:38.187 Device Information : IOPS MiB/s Average min max 00:22:38.187 PCIE (0000:5e:00.0) NSID 1 from core 0: 96983.23 378.84 329.59 34.25 7193.22 00:22:38.187 ======================================================== 00:22:38.187 Total : 96983.23 378.84 329.59 34.25 7193.22 00:22:38.187 00:22:38.187 21:47:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:38.187 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.568 Initializing NVMe Controllers 00:22:39.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:39.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:39.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:39.568 Initialization complete. Launching workers. 
00:22:39.568 ======================================================== 00:22:39.568 Latency(us) 00:22:39.568 Device Information : IOPS MiB/s Average min max 00:22:39.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 89.68 0.35 11532.55 569.30 45478.38 00:22:39.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.82 0.20 19451.66 6711.84 47904.49 00:22:39.568 ======================================================== 00:22:39.568 Total : 141.50 0.55 14432.51 569.30 47904.49 00:22:39.568 00:22:39.568 21:47:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:39.568 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.508 Initializing NVMe Controllers 00:22:40.508 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:40.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:40.508 Initialization complete. Launching workers. 00:22:40.508 ======================================================== 00:22:40.508 Latency(us) 00:22:40.508 Device Information : IOPS MiB/s Average min max 00:22:40.508 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7876.87 30.77 4074.26 848.28 8538.39 00:22:40.508 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3815.94 14.91 8423.41 6233.73 16017.43 00:22:40.508 ======================================================== 00:22:40.508 Total : 11692.81 45.68 5493.60 848.28 16017.43 00:22:40.508 00:22:40.508 21:47:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:40.508 21:47:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:40.508 21:47:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:40.768 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.308 Initializing NVMe Controllers 00:22:43.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:43.308 Controller IO queue size 128, less than required. 00:22:43.308 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:43.308 Controller IO queue size 128, less than required. 00:22:43.308 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:43.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:43.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:43.308 Initialization complete. Launching workers. 
00:22:43.308 ======================================================== 00:22:43.308 Latency(us) 00:22:43.308 Device Information : IOPS MiB/s Average min max 00:22:43.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 772.56 193.14 172916.80 108159.58 242421.28 00:22:43.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 534.69 133.67 245770.44 136945.77 361443.65 00:22:43.308 ======================================================== 00:22:43.308 Total : 1307.25 326.81 202715.50 108159.58 361443.65 00:22:43.308 00:22:43.308 21:47:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:43.308 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.569 No valid NVMe controllers or AIO or URING devices found 00:22:43.569 Initializing NVMe Controllers 00:22:43.569 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:43.569 Controller IO queue size 128, less than required. 00:22:43.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:43.569 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:43.569 Controller IO queue size 128, less than required. 00:22:43.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:43.569 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:43.569 WARNING: Some requested NVMe devices were skipped 00:22:43.569 21:47:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:43.569 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.108 Initializing NVMe Controllers 00:22:46.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:46.108 Controller IO queue size 128, less than required. 00:22:46.108 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:46.108 Controller IO queue size 128, less than required. 00:22:46.108 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:46.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:46.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:46.108 Initialization complete. Launching workers. 
00:22:46.108 00:22:46.108 ==================== 00:22:46.108 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:46.108 TCP transport: 00:22:46.108 polls: 58381 00:22:46.108 idle_polls: 21485 00:22:46.108 sock_completions: 36896 00:22:46.108 nvme_completions: 3371 00:22:46.108 submitted_requests: 5104 00:22:46.108 queued_requests: 1 00:22:46.108 00:22:46.108 ==================== 00:22:46.108 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:46.108 TCP transport: 00:22:46.108 polls: 51502 00:22:46.108 idle_polls: 16066 00:22:46.108 sock_completions: 35436 00:22:46.108 nvme_completions: 3349 00:22:46.108 submitted_requests: 5038 00:22:46.108 queued_requests: 1 00:22:46.108 ======================================================== 00:22:46.108 Latency(us) 00:22:46.108 Device Information : IOPS MiB/s Average min max 00:22:46.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 842.50 210.62 157609.82 78676.26 307282.70 00:22:46.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 837.00 209.25 156319.46 84294.15 234264.00 00:22:46.108 ======================================================== 00:22:46.108 Total : 1679.49 419.87 156966.75 78676.26 307282.70 00:22:46.108 00:22:46.108 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:46.108 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.368 rmmod nvme_tcp 00:22:46.368 rmmod nvme_fabrics 00:22:46.368 rmmod nvme_keyring 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3139881 ']' 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3139881 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3139881 ']' 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3139881 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:46.368 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3139881 00:22:46.628 21:47:54 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:46.628 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:46.628 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3139881' 00:22:46.628 killing process with pid 3139881 00:22:46.628 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3139881 00:22:46.628 21:47:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3139881 00:22:48.009 21:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:48.009 21:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:48.009 21:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:48.009 21:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.009 21:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:48.009 21:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.009 21:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.009 21:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:50.551 00:22:50.551 real 0m24.254s 00:22:50.551 user 1m6.016s 00:22:50.551 sys 0m6.778s 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:50.551 ************************************ 00:22:50.551 END TEST nvmf_perf 00:22:50.551 ************************************ 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.551 ************************************ 00:22:50.551 START TEST nvmf_fio_host 00:22:50.551 ************************************ 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:50.551 * Looking for test storage... 
00:22:50.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.551 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:50.552 21:47:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:55.861 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:55.861 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.861 
21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:55.861 Found net devices under 0000:86:00.0: cvl_0_0 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:55.861 Found net devices under 0000:86:00.1: cvl_0_1 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:55.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:22:55.861 00:22:55.861 --- 10.0.0.2 ping statistics --- 00:22:55.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.861 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:22:55.861 00:22:55.861 --- 10.0.0.1 ping statistics --- 00:22:55.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.861 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.861 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3146103 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3146103 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3146103 ']' 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.862 21:48:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:55.862 [2024-07-24 21:48:03.346847] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:22:55.862 [2024-07-24 21:48:03.346888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.862 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.862 [2024-07-24 21:48:03.404722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.862 [2024-07-24 21:48:03.485763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.862 [2024-07-24 21:48:03.485799] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.862 [2024-07-24 21:48:03.485807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.862 [2024-07-24 21:48:03.485813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.862 [2024-07-24 21:48:03.485819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
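The trace above captures the per-test network plumbing that nvmf/common.sh performs for a phy (e810) run: the first port, cvl_0_0, is moved into a private namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24 as the target side; the second port, cvl_0_1, stays in the root namespace as the initiator at 10.0.0.1/24; an iptables rule admits TCP port 4420; both directions are ping-checked; and the target application is then launched inside the namespace. A minimal standalone sketch of the same sequence, assuming two back-to-back ports with the interface names seen in this log and an SPDK build tree as the working directory (run as root):

  # Rebuild the loopback NVMe/TCP topology used by these tests.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator reachability
  # Start the target inside the namespace with the same core mask as the log (0xF).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &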
00:22:55.862 [2024-07-24 21:48:03.485858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.862 [2024-07-24 21:48:03.485874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.862 [2024-07-24 21:48:03.485962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.862 [2024-07-24 21:48:03.485963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.121 21:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.121 21:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:56.121 21:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:56.380 [2024-07-24 21:48:04.323685] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.380 21:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:56.380 21:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:56.380 21:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.380 21:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:56.640 Malloc1 00:22:56.640 21:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.899 21:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:56.899 21:48:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.159 [2024-07-24 21:48:05.118159] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.159 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:57.419 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:57.419 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:57.419 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:57.419 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:22:57.419 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:57.419 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:22:57.419 21:48:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:57.419 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:57.420 21:48:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:57.679 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:57.679 fio-3.35 00:22:57.679 Starting 1 thread 00:22:57.679 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.245 00:23:00.245 test: (groupid=0, jobs=1): err= 0: pid=3146700: Wed Jul 24 21:48:08 2024 00:23:00.245 read: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(86.4MiB/2003msec) 00:23:00.245 slat (nsec): min=1619, max=248689, avg=1798.97, stdev=2355.14 00:23:00.245 clat (usec): min=3574, max=19258, avg=6774.83, stdev=1745.62 00:23:00.245 lat (usec): min=3576, max=19269, avg=6776.63, stdev=1745.82 00:23:00.245 clat percentiles (usec): 00:23:00.245 | 1.00th=[ 4424], 5.00th=[ 5014], 10.00th=[ 5342], 20.00th=[ 5735], 00:23:00.245 | 30.00th=[ 5932], 40.00th=[ 6128], 50.00th=[ 6325], 60.00th=[ 6521], 00:23:00.245 | 70.00th=[ 6849], 80.00th=[ 7373], 90.00th=[ 8717], 95.00th=[10683], 00:23:00.245 | 99.00th=[13435], 99.50th=[15008], 99.90th=[16909], 99.95th=[18220], 00:23:00.245 | 99.99th=[19268] 00:23:00.245 bw ( KiB/s): min=42000, max=44936, per=99.76%, avg=44048.00, stdev=1394.71, samples=4 00:23:00.245 iops : min=10500, max=11234, avg=11012.00, stdev=348.68, samples=4 00:23:00.245 write: IOPS=11.0k, BW=43.0MiB/s 
(45.1MB/s)(86.1MiB/2003msec); 0 zone resets 00:23:00.245 slat (nsec): min=1679, max=246963, avg=1890.33, stdev=1827.43 00:23:00.245 clat (usec): min=2018, max=17364, avg=4801.83, stdev=1040.25 00:23:00.245 lat (usec): min=2020, max=17370, avg=4803.72, stdev=1040.65 00:23:00.245 clat percentiles (usec): 00:23:00.245 | 1.00th=[ 2835], 5.00th=[ 3359], 10.00th=[ 3720], 20.00th=[ 4146], 00:23:00.245 | 30.00th=[ 4424], 40.00th=[ 4621], 50.00th=[ 4752], 60.00th=[ 4948], 00:23:00.245 | 70.00th=[ 5080], 80.00th=[ 5276], 90.00th=[ 5604], 95.00th=[ 6325], 00:23:00.245 | 99.00th=[ 8291], 99.50th=[ 9241], 99.90th=[15008], 99.95th=[15795], 00:23:00.245 | 99.99th=[16909] 00:23:00.245 bw ( KiB/s): min=42624, max=44656, per=99.96%, avg=44002.00, stdev=951.44, samples=4 00:23:00.245 iops : min=10656, max=11164, avg=11000.50, stdev=237.86, samples=4 00:23:00.245 lat (msec) : 4=7.95%, 10=88.72%, 20=3.33% 00:23:00.245 cpu : usr=69.38%, sys=24.23%, ctx=19, majf=0, minf=5 00:23:00.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:00.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:00.245 issued rwts: total=22109,22043,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.245 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:00.245 00:23:00.245 Run status group 0 (all jobs): 00:23:00.245 READ: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=86.4MiB (90.6MB), run=2003-2003msec 00:23:00.245 WRITE: bw=43.0MiB/s (45.1MB/s), 43.0MiB/s-43.0MiB/s (45.1MB/s-45.1MB/s), io=86.1MiB (90.3MB), run=2003-2003msec 00:23:00.245 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:00.245 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:00.245 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:23:00.245 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:00.245 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:23:00.245 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:00.245 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:23:00.246 21:48:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:00.246 21:48:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:00.504 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:00.504 fio-3.35 00:23:00.504 Starting 1 thread 00:23:00.504 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.042 00:23:03.042 test: (groupid=0, jobs=1): err= 0: pid=3147427: Wed Jul 24 21:48:10 2024 00:23:03.042 read: IOPS=9047, BW=141MiB/s (148MB/s)(283MiB/2005msec) 00:23:03.042 slat (nsec): min=2593, max=90521, avg=2927.51, stdev=1369.79 00:23:03.042 clat (usec): min=2880, max=43333, avg=8709.12, stdev=3583.96 00:23:03.042 lat (usec): min=2882, max=43336, avg=8712.04, stdev=3584.34 00:23:03.042 clat percentiles (usec): 00:23:03.042 | 1.00th=[ 4080], 5.00th=[ 4948], 10.00th=[ 5538], 20.00th=[ 6390], 00:23:03.042 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8029], 60.00th=[ 8586], 00:23:03.042 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[11731], 95.00th=[13698], 00:23:03.042 | 99.00th=[25560], 99.50th=[26870], 99.90th=[28181], 99.95th=[29230], 00:23:03.042 | 99.99th=[41681] 00:23:03.042 bw ( KiB/s): min=59936, max=86336, per=49.22%, avg=71248.00, stdev=10998.83, samples=4 00:23:03.042 iops : min= 3746, max= 5396, avg=4453.00, stdev=687.43, samples=4 00:23:03.042 write: IOPS=5285, BW=82.6MiB/s (86.6MB/s)(145MiB/1760msec); 0 zone resets 00:23:03.042 slat (usec): min=30, max=259, avg=32.25, stdev= 5.78 00:23:03.042 clat (usec): min=4280, max=37335, avg=9522.72, stdev=3803.73 00:23:03.042 lat (usec): min=4312, max=37370, avg=9554.98, stdev=3806.33 00:23:03.042 clat percentiles (usec): 00:23:03.042 | 1.00th=[ 5997], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7570], 00:23:03.042 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9110], 00:23:03.042 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11338], 95.00th=[13435], 00:23:03.042 | 99.00th=[29492], 99.50th=[30802], 99.90th=[33817], 99.95th=[35390], 00:23:03.042 | 99.99th=[37487] 00:23:03.042 bw ( KiB/s): min=63104, max=89536, per=87.40%, avg=73912.00, stdev=11146.06, samples=4 00:23:03.042 iops : min= 3944, max= 5596, avg=4619.50, stdev=696.63, samples=4 00:23:03.042 lat (msec) : 4=0.54%, 10=77.83%, 20=18.86%, 50=2.77% 00:23:03.042 cpu : usr=84.04%, 
sys=12.62%, ctx=24, majf=0, minf=2 00:23:03.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:23:03.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:03.042 issued rwts: total=18141,9302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:03.042 00:23:03.042 Run status group 0 (all jobs): 00:23:03.042 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=283MiB (297MB), run=2005-2005msec 00:23:03.042 WRITE: bw=82.6MiB/s (86.6MB/s), 82.6MiB/s-82.6MiB/s (86.6MB/s-86.6MB/s), io=145MiB (152MB), run=1760-1760msec 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.042 rmmod nvme_tcp 00:23:03.042 rmmod nvme_fabrics 00:23:03.042 rmmod nvme_keyring 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3146103 ']' 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3146103 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3146103 ']' 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3146103 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.042 21:48:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3146103 00:23:03.042 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:03.042 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:03.042 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3146103' 00:23:03.042 killing process with pid 3146103 00:23:03.042 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@967 -- # kill 3146103 00:23:03.042 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3146103 00:23:03.302 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.302 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:03.302 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.302 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.302 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.302 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.302 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.302 21:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.211 21:48:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:05.211 00:23:05.211 real 0m15.139s 00:23:05.212 user 0m46.963s 00:23:05.212 sys 0m5.828s 00:23:05.212 21:48:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:05.212 21:48:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.212 ************************************ 00:23:05.212 END TEST nvmf_fio_host 00:23:05.212 ************************************ 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.470 ************************************ 00:23:05.470 START TEST nvmf_failover 00:23:05.470 ************************************ 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:05.470 * Looking for test storage... 
00:23:05.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.470 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
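From this point nvmftestinit repeats, for the failover test, the same device discovery and namespace plumbing shown earlier for the fio_host run. Independent of the test scripts, a subsystem exported this way can also be sanity-checked from the initiator side with the kernel NVMe/TCP initiator that common.sh already prepares (nvme connect plus the nvme-tcp module). A sketch, with the address, port and NQN as they appear elsewhere in this log (requires nvme-cli; not part of the automated run):

  # Manual initiator-side check of an exported subsystem.
  modprobe nvme-tcp
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                        # the namespace shows up as a new /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1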
00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.471 21:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.751 21:48:18 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:10.751 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:10.751 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:10.751 Found net devices under 0000:86:00.0: cvl_0_0 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:10.751 Found net devices under 0000:86:00.1: cvl_0_1 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.751 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:10.752 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.011 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.011 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.011 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:11.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:23:11.011 00:23:11.011 --- 10.0.0.2 ping statistics --- 00:23:11.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.011 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:23:11.011 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:11.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:23:11.011 00:23:11.011 --- 10.0.0.1 ping statistics --- 00:23:11.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.011 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:23:11.011 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.011 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:11.011 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3151410 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3151410 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3151410 ']' 00:23:11.012 21:48:18 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.012 21:48:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:11.012 [2024-07-24 21:48:19.018550] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:23:11.012 [2024-07-24 21:48:19.018594] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.012 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.012 [2024-07-24 21:48:19.074996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:11.272 [2024-07-24 21:48:19.154818] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.272 [2024-07-24 21:48:19.154851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.272 [2024-07-24 21:48:19.154858] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.272 [2024-07-24 21:48:19.154864] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.272 [2024-07-24 21:48:19.154869] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
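For reference, the nvmf_tcp_init steps traced above amount to the shell sketch below. The device names (cvl_0_0, cvl_0_1), the 10.0.0.1/10.0.0.2 addressing and the namespace name are taken from this run; the nvmf_tgt path is shortened to be relative to the SPDK tree, and a different NIC or address plan would change the names accordingly.

  # start from clean interfaces, then move one port into a private namespace for the target
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side keeps 10.0.0.1, target side gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and confirm both directions are reachable
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # the target application then runs inside the namespace (path shortened here)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE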
00:23:11.272 [2024-07-24 21:48:19.154905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.272 [2024-07-24 21:48:19.154924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:11.272 [2024-07-24 21:48:19.154926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.841 21:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.841 21:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:11.841 21:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.841 21:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:11.841 21:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:11.841 21:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.841 21:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:12.100 [2024-07-24 21:48:20.004084] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.101 21:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:12.360 Malloc0 00:23:12.360 21:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:12.360 21:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:12.618 21:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.878 [2024-07-24 21:48:20.771654] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.878 21:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:12.878 [2024-07-24 21:48:20.964227] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:12.878 21:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:13.137 [2024-07-24 21:48:21.148812] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:13.137 21:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3151887 00:23:13.137 21:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:13.137 21:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.137 21:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3151887 /var/tmp/bdevperf.sock 00:23:13.138 21:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3151887 ']' 00:23:13.138 21:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.138 21:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.138 21:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.138 21:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.138 21:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:14.148 21:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.148 21:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:14.148 21:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:14.408 NVMe0n1 00:23:14.408 21:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:14.667 00:23:14.667 21:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:14.667 21:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3152123 00:23:14.667 21:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:15.613 21:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.613 [2024-07-24 21:48:23.724124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724256] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724310] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724346] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.613 [2024-07-24 21:48:23.724358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731f50 is same with the state(5) to be set 00:23:15.872 21:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:19.162 21:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:19.162 00:23:19.162 21:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:19.422 [2024-07-24 21:48:27.313460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 
21:48:27.313594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313648] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same 
with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.422 [2024-07-24 21:48:27.313809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313866] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.313998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the 
state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314016] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314028] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314065] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314225] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 
21:48:27.314272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314284] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 [2024-07-24 21:48:27.314296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732d70 is same with the state(5) to be set 00:23:19.423 21:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:22.716 21:48:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:22.716 [2024-07-24 21:48:30.511334] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.716 21:48:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:23.655 21:48:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:23.655 [2024-07-24 21:48:31.710017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710072] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710091] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 
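The long runs of near-identical tcp.c:1653 messages here and after each nvmf_subsystem_remove_listener call above come from nvmf_tcp_qpair_set_recv_state(), which logs this notice whenever a queue pair's receive state is set to the value it already holds while the listener's qpairs (0x1731f50, 0x1732d70 and 0x18ecb40 in this run) are torn down; only the timestamp differs between repetitions. When skimming a console log like this one, a throwaway one-liner along these lines (console.log is a hypothetical saved copy of this output, not part of the test scripts) collapses the noise:

  # count how many of these notices each qpair address produced (console.log is hypothetical)
  grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c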
00:23:23.655 [2024-07-24 21:48:31.710139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.655 [2024-07-24 21:48:31.710210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 [2024-07-24 21:48:31.710399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecb40 is same with the state(5) to be set 00:23:23.656 21:48:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@59 -- # wait 3152123 00:23:30.241 0 00:23:30.241 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3151887 00:23:30.241 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3151887 ']' 00:23:30.241 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3151887 00:23:30.241 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:30.241 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:30.241 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3151887 00:23:30.241 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:30.241 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:30.241 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3151887' 00:23:30.241 killing process with pid 3151887 00:23:30.241 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3151887 00:23:30.241 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3151887 00:23:30.241 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:30.241 [2024-07-24 21:48:21.220258] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:23:30.241 [2024-07-24 21:48:21.220316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3151887 ] 00:23:30.241 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.241 [2024-07-24 21:48:21.275019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.241 [2024-07-24 21:48:21.350063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.241 Running I/O for 15 seconds... 
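Stripped of the /var/jenkins/workspace/... prefixes, the failover exercise that produced the try.txt dump below is roughly the following RPC sequence; all commands appear in the trace above, with paths made relative to the SPDK tree and the three add_listener calls condensed into a loop. Target-side rpc.py calls use the default /var/tmp/spdk.sock, host-side calls address bdevperf via -s /var/tmp/bdevperf.sock.

  # target side: one malloc namespace, three TCP listeners on the same subsystem
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

  # host side: bdevperf with failover enabled (-f), two initial paths, then flip the listeners
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
  sleep 3
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
  sleep 3
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # back to 4420
  wait   # let perform_tests finish its 15-second run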
00:23:30.241 [2024-07-24 21:48:23.724797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.241 [2024-07-24 21:48:23.724834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.241 [2024-07-24 21:48:23.724850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.241 [2024-07-24 21:48:23.724859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.241 [2024-07-24 21:48:23.724868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.241 [2024-07-24 21:48:23.724876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.241 [2024-07-24 21:48:23.724886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.241 [2024-07-24 21:48:23.724893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.241 [2024-07-24 21:48:23.724902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.241 [2024-07-24 21:48:23.724908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.241 [2024-07-24 21:48:23.724917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.241 [2024-07-24 21:48:23.724924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.241 [2024-07-24 21:48:23.724932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.724940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.724949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.724956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.724964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.724971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.724980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.724986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.724993] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725158] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97112 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.242 [2024-07-24 21:48:23.725438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:30.242 [2024-07-24 21:48:23.725467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.242 [2024-07-24 21:48:23.725527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.242 [2024-07-24 21:48:23.725535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725617] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.243 [2024-07-24 21:48:23.725918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.725985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.725993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.726002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.726009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.726017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.726024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.726032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.726038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.726051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.726058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.726066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.726077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:30.243 [2024-07-24 21:48:23.726085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.243 [2024-07-24 21:48:23.726092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 
[2024-07-24 21:48:23.726237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.244 [2024-07-24 21:48:23.726535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:21 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.244 [2024-07-24 21:48:23.726555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.244 [2024-07-24 21:48:23.726572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.244 [2024-07-24 21:48:23.726589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.244 [2024-07-24 21:48:23.726606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.244 [2024-07-24 21:48:23.726622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.244 [2024-07-24 21:48:23.726637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.244 [2024-07-24 21:48:23.726651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.244 [2024-07-24 21:48:23.726666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.244 [2024-07-24 21:48:23.726674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.245 [2024-07-24 21:48:23.726680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.245 [2024-07-24 21:48:23.726695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97904 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:30.245 [2024-07-24 21:48:23.726710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.245 [2024-07-24 21:48:23.726724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.245 [2024-07-24 21:48:23.726739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.245 [2024-07-24 21:48:23.726756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.245 [2024-07-24 21:48:23.726773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.245 [2024-07-24 21:48:23.726787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.245 [2024-07-24 21:48:23.726815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.245 [2024-07-24 21:48:23.726821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97952 len:8 PRP1 0x0 PRP2 0x0 00:23:30.245 [2024-07-24 21:48:23.726828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726871] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bd14b0 was disconnected and freed. reset controller. 
00:23:30.245 [2024-07-24 21:48:23.726881] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:30.245 [2024-07-24 21:48:23.726901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.245 [2024-07-24 21:48:23.726909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.245 [2024-07-24 21:48:23.726923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.245 [2024-07-24 21:48:23.726940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.245 [2024-07-24 21:48:23.726953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:23.726960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:30.245 [2024-07-24 21:48:23.729834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:30.245 [2024-07-24 21:48:23.729863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bde540 (9): Bad file descriptor 00:23:30.245 [2024-07-24 21:48:23.892882] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:30.245 [2024-07-24 21:48:27.315561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315753] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.245 [2024-07-24 21:48:27.315920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.245 [2024-07-24 21:48:27.315927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.315935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.315942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.315951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.315958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.315966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.315974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.315982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.315989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.315997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:73 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62816 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.246 [2024-07-24 21:48:27.316357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:30.246 [2024-07-24 21:48:27.316372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.246 [2024-07-24 21:48:27.316380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 21:48:27.316388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 21:48:27.316404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 21:48:27.316419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 21:48:27.316434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 21:48:27.316449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 21:48:27.316464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 21:48:27.316479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 21:48:27.316493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 21:48:27.316508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 
21:48:27.316522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 21:48:27.316537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 21:48:27.316552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.247 [2024-07-24 21:48:27.316674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.247 [2024-07-24 21:48:27.316939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.247 [2024-07-24 21:48:27.316947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.316954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.316962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.316969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.316977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.316983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.316991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.316998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 
21:48:27.317130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.248 [2024-07-24 21:48:27.317407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.248 [2024-07-24 21:48:27.317433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63448 len:8 PRP1 0x0 PRP2 0x0 00:23:30.248 [2024-07-24 21:48:27.317440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.248 [2024-07-24 21:48:27.317454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.248 [2024-07-24 21:48:27.317460] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63456 len:8 PRP1 0x0 PRP2 0x0 00:23:30.248 [2024-07-24 21:48:27.317466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.248 [2024-07-24 21:48:27.317478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.248 [2024-07-24 21:48:27.317484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63464 len:8 PRP1 0x0 PRP2 0x0 00:23:30.248 [2024-07-24 21:48:27.317490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.248 [2024-07-24 21:48:27.317502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.248 [2024-07-24 21:48:27.317508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63472 len:8 PRP1 0x0 PRP2 0x0 00:23:30.248 [2024-07-24 21:48:27.317514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.248 [2024-07-24 21:48:27.317525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.248 [2024-07-24 21:48:27.317530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63480 len:8 PRP1 0x0 PRP2 0x0 00:23:30.248 [2024-07-24 21:48:27.317537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.248 [2024-07-24 21:48:27.317545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.248 [2024-07-24 21:48:27.317550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.248 [2024-07-24 21:48:27.317556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63488 len:8 PRP1 0x0 PRP2 0x0 00:23:30.249 [2024-07-24 21:48:27.317563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:27.317570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.249 [2024-07-24 21:48:27.317575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.249 [2024-07-24 21:48:27.317581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63496 len:8 PRP1 0x0 PRP2 0x0 00:23:30.249 [2024-07-24 21:48:27.317587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:27.317594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.249 [2024-07-24 21:48:27.317599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.249 [2024-07-24 21:48:27.317605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63504 len:8 PRP1 
0x0 PRP2 0x0
00:23:30.249 [2024-07-24 21:48:27.317611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.249 [2024-07-24 21:48:27.317619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:30.249 [2024-07-24 21:48:27.317624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:30.249 [2024-07-24 21:48:27.317629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63512 len:8 PRP1 0x0 PRP2 0x0
00:23:30.249 [2024-07-24 21:48:27.317636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.249 [2024-07-24 21:48:27.317676] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c023f0 was disconnected and freed. reset controller.
00:23:30.249 [2024-07-24 21:48:27.317686] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:30.249 [2024-07-24 21:48:27.317705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.249 [2024-07-24 21:48:27.317712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.249 [2024-07-24 21:48:27.317720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.249 [2024-07-24 21:48:27.317727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.249 [2024-07-24 21:48:27.317734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.249 [2024-07-24 21:48:27.317740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.249 [2024-07-24 21:48:27.317747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.249 [2024-07-24 21:48:27.317754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.249 [2024-07-24 21:48:27.317760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:30.249 [2024-07-24 21:48:27.317781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bde540 (9): Bad file descriptor
00:23:30.249 [2024-07-24 21:48:27.320629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:30.249 [2024-07-24 21:48:27.478649] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:30.249 [2024-07-24 21:48:31.710732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.249 [2024-07-24 21:48:31.710768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.249 [2024-07-24 21:48:31.710796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.249 [2024-07-24 21:48:31.710812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.249 [2024-07-24 21:48:31.710827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.249 [2024-07-24 21:48:31.710843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.249 [2024-07-24 21:48:31.710859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.710874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.710888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.710903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.710918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 
21:48:31.710926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.710932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.710947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.710963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.710980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.710988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.710994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.711002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.711009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.711017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.711025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.711033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.711040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.711054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.711061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.711069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.711075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.711084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.711090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.711098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.711105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.711113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.711120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.711127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.711134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.711142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.711149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.249 [2024-07-24 21:48:31.711157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.249 [2024-07-24 21:48:31.711164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711235] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.250 [2024-07-24 21:48:31.711256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.250 [2024-07-24 21:48:31.711271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.250 [2024-07-24 21:48:31.711286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.250 [2024-07-24 21:48:31.711301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.250 [2024-07-24 21:48:31.711315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.250 [2024-07-24 21:48:31.711330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.250 [2024-07-24 21:48:31.711345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.250 [2024-07-24 21:48:31.711363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 
nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.250 [2024-07-24 21:48:31.711480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108872 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 
[2024-07-24 21:48:31.711691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.250 [2024-07-24 21:48:31.711730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.250 [2024-07-24 21:48:31.711736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.251 [2024-07-24 21:48:31.711753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.251 [2024-07-24 21:48:31.711769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.251 [2024-07-24 21:48:31.711783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.251 [2024-07-24 21:48:31.711798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.251 [2024-07-24 21:48:31.711813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.251 [2024-07-24 21:48:31.711828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.251 [2024-07-24 21:48:31.711843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.711858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.711873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.711888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.711903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.711917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.711933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.711948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.711963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.711977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.711985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.711992] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:108544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.251 [2024-07-24 21:48:31.712191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.251 [2024-07-24 21:48:31.712199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.251 [2024-07-24 21:48:31.712205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 
21:48:31.712457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.252 [2024-07-24 21:48:31.712672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.252 [2024-07-24 21:48:31.712700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109296 len:8 PRP1 0x0 PRP2 0x0 00:23:30.252 [2024-07-24 21:48:31.712708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.252 [2024-07-24 21:48:31.712721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.252 [2024-07-24 21:48:31.712729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109304 len:8 PRP1 0x0 PRP2 0x0 00:23:30.252 [2024-07-24 21:48:31.712737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712778] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c020b0 was disconnected and freed. reset controller. 
00:23:30.252 [2024-07-24 21:48:31.712787] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:30.252 [2024-07-24 21:48:31.712806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.252 [2024-07-24 21:48:31.712813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.252 [2024-07-24 21:48:31.712827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.252 [2024-07-24 21:48:31.712840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.252 [2024-07-24 21:48:31.712853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.252 [2024-07-24 21:48:31.712859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:30.253 [2024-07-24 21:48:31.715703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:30.253 [2024-07-24 21:48:31.715733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bde540 (9): Bad file descriptor 00:23:30.253 [2024-07-24 21:48:31.743116] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:30.253 00:23:30.253 Latency(us) 00:23:30.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.253 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:30.253 Verification LBA range: start 0x0 length 0x4000 00:23:30.253 NVMe0n1 : 15.00 10804.14 42.20 1081.08 0.00 10747.26 1275.10 27126.21 00:23:30.253 =================================================================================================================== 00:23:30.253 Total : 10804.14 42.20 1081.08 0.00 10747.26 1275.10 27126.21 00:23:30.253 Received shutdown signal, test time was about 15.000000 seconds 00:23:30.253 00:23:30.253 Latency(us) 00:23:30.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.253 =================================================================================================================== 00:23:30.253 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:30.253 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:30.253 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:30.253 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:30.253 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3154571 00:23:30.253 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:30.253 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3154571 /var/tmp/bdevperf.sock 00:23:30.253 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3154571 ']' 00:23:30.253 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.253 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.253 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:30.253 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.253 21:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:30.823 21:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.823 21:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:30.823 21:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:31.082 [2024-07-24 21:48:38.940126] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:31.082 21:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:31.082 [2024-07-24 21:48:39.124613] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:31.082 21:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:31.341 NVMe0n1 00:23:31.341 21:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:31.601 00:23:31.860 21:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:32.119 00:23:32.119 21:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:32.119 21:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:32.377 21:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:32.377 21:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:35.670 21:48:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:35.670 21:48:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:35.670 21:48:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3155525 00:23:35.670 21:48:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.670 21:48:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3155525 00:23:37.122 0 00:23:37.122 21:48:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:37.122 [2024-07-24 21:48:37.972526] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:23:37.122 [2024-07-24 21:48:37.972580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154571 ] 00:23:37.123 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.123 [2024-07-24 21:48:38.027645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.123 [2024-07-24 21:48:38.097779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.123 [2024-07-24 21:48:40.447380] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:37.123 [2024-07-24 21:48:40.447435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.123 [2024-07-24 21:48:40.447447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.123 [2024-07-24 21:48:40.447456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.123 [2024-07-24 21:48:40.447463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.123 [2024-07-24 21:48:40.447471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.123 [2024-07-24 21:48:40.447478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.123 [2024-07-24 21:48:40.447485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.123 [2024-07-24 21:48:40.447493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.123 [2024-07-24 21:48:40.447500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:37.123 [2024-07-24 21:48:40.447527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.123 [2024-07-24 21:48:40.447543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e17540 (9): Bad file descriptor 00:23:37.123 [2024-07-24 21:48:40.455908] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:37.123 Running I/O for 1 seconds... 
00:23:37.123 00:23:37.123 Latency(us) 00:23:37.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.123 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:37.123 Verification LBA range: start 0x0 length 0x4000 00:23:37.123 NVMe0n1 : 1.01 10932.46 42.70 0.00 0.00 11658.66 2507.46 28607.89 00:23:37.123 =================================================================================================================== 00:23:37.123 Total : 10932.46 42.70 0.00 0.00 11658.66 2507.46 28607.89 00:23:37.123 21:48:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:37.123 21:48:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:37.123 21:48:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:37.123 21:48:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:37.123 21:48:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:37.383 21:48:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:37.642 21:48:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3154571 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3154571 ']' 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3154571 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3154571 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3154571' 00:23:40.935 killing process with pid 3154571 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3154571 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3154571 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:40.935 21:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:41.195 rmmod nvme_tcp 00:23:41.195 rmmod nvme_fabrics 00:23:41.195 rmmod nvme_keyring 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3151410 ']' 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3151410 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3151410 ']' 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3151410 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3151410 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3151410' 00:23:41.195 killing process with pid 3151410 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3151410 00:23:41.195 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3151410 00:23:41.455 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:41.455 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:41.455 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:41.455 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:41.455 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:41.455 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.455 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.455 21:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:43.994 00:23:43.994 real 0m38.163s 00:23:43.994 user 2m2.950s 00:23:43.994 sys 0m7.464s 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:43.994 ************************************ 00:23:43.994 END TEST nvmf_failover 00:23:43.994 ************************************ 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.994 ************************************ 00:23:43.994 START TEST nvmf_host_discovery 00:23:43.994 ************************************ 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:43.994 * Looking for test storage... 00:23:43.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.994 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.995 21:48:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.995 21:48:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:49.273 21:48:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:49.273 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:49.273 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:49.273 Found net devices under 0000:86:00.0: cvl_0_0 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:49.273 Found net devices under 0000:86:00.1: cvl_0_1 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:49.273 21:48:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.273 21:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.273 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:49.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:23:49.273 00:23:49.273 --- 10.0.0.2 ping statistics --- 00:23:49.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.273 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:23:49.273 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:49.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:23:49.273 00:23:49.273 --- 10.0.0.1 ping statistics --- 00:23:49.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.273 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:23:49.273 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3159788 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3159788 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3159788 ']' 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.274 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.274 [2024-07-24 21:48:57.095308] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:23:49.274 [2024-07-24 21:48:57.095352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.274 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.274 [2024-07-24 21:48:57.151678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.274 [2024-07-24 21:48:57.230343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.274 [2024-07-24 21:48:57.230375] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.274 [2024-07-24 21:48:57.230382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.274 [2024-07-24 21:48:57.230388] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.274 [2024-07-24 21:48:57.230393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.274 [2024-07-24 21:48:57.230410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.842 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.842 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:49.842 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:49.842 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:49.842 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.842 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.842 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:49.842 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.843 [2024-07-24 21:48:57.925054] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:23:49.843 [2024-07-24 21:48:57.937197] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.843 null0 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.843 null1 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.843 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.103 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.103 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3160033 00:23:50.103 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:50.103 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3160033 /tmp/host.sock 00:23:50.103 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3160033 ']' 00:23:50.103 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:50.103 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.103 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:50.103 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:50.103 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.103 21:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.103 [2024-07-24 21:48:58.013100] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
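The target-side bring-up traced above reduces to a short RPC sequence against the default application socket /var/tmp/spdk.sock. A minimal sketch using the same commands the trace shows (rpc_cmd here is the suite's wrapper around scripts/rpc.py; the 10.0.0.2 address and the bdev sizes are copied from this run, not defaults):

    # create the TCP transport (flags copied verbatim from the trace)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    # expose the discovery subsystem on the well-known discovery port 8009
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    # create the two null bdevs the test later exports as namespaces
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine

A second nvmf_tgt instance is then started with '-r /tmp/host.sock' and plays the host role; every RPC issued with '-s /tmp/host.sock' below talks to that process, while plain rpc_cmd keeps talking to the target.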
00:23:50.103 [2024-07-24 21:48:58.013142] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160033 ] 00:23:50.103 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.103 [2024-07-24 21:48:58.066700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.103 [2024-07-24 21:48:58.146032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.043 
21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.043 21:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:51.043 21:48:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.043 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.043 [2024-07-24 21:48:59.156417] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.303 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.304 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.304 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.304 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.304 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:51.304 21:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:51.873 [2024-07-24 21:48:59.886281] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:51.873 [2024-07-24 21:48:59.886299] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:51.873 [2024-07-24 21:48:59.886314] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:51.873 
[2024-07-24 21:48:59.973578] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:52.132 [2024-07-24 21:49:00.038687] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:52.132 [2024-07-24 21:49:00.038707] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:52.391 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
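The assertions in this section are built from a few small helpers whose xtrace output repeats throughout the trace. A sketch reconstructed from that output (the real definitions live in the test scripts and may differ in detail):

    get_subsystem_names() {
        # names of the controllers the host-side bdev_nvme module has attached
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # block devices visible on the host side, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    waitforcondition() {
        # poll a shell condition roughly once a second, giving up after ~10 tries
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

Typical uses, as seen in the trace:

    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'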
00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:52.392 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:52.651 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:52.652 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:52.652 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:52.911 21:49:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.911 [2024-07-24 21:49:00.820950] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:52.911 [2024-07-24 21:49:00.821667] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:52.911 [2024-07-24 21:49:00.821690] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.911 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:52.912 21:49:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.912 [2024-07-24 21:49:00.951385] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:52.912 21:49:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:53.171 [2024-07-24 21:49:01.051277] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:53.171 [2024-07-24 21:49:01.051293] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:53.171 [2024-07-24 21:49:01.051298] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:54.111 21:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:54.111 21:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:54.111 21:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:54.111 21:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:54.111 21:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:54.111 21:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.112 21:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.112 21:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.112 21:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:54.112 21:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:54.112 21:49:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.112 [2024-07-24 21:49:02.068717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.112 [2024-07-24 21:49:02.068743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.112 [2024-07-24 21:49:02.068753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.112 [2024-07-24 21:49:02.068760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.112 [2024-07-24 21:49:02.068768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.112 [2024-07-24 21:49:02.068776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.112 [2024-07-24 21:49:02.068784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.112 [2024-07-24 
21:49:02.068790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.112 [2024-07-24 21:49:02.068797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00f30 is same with the state(5) to be set 00:23:54.112 [2024-07-24 21:49:02.069072] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:54.112 [2024-07-24 21:49:02.069086] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.112 [2024-07-24 21:49:02.078727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e00f30 (9): Bad file descriptor 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.112 [2024-07-24 21:49:02.088766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.112 [2024-07-24 21:49:02.089231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.112 [2024-07-24 21:49:02.089250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e00f30 with addr=10.0.0.2, port=4420 00:23:54.112 [2024-07-24 21:49:02.089259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00f30 is same with the state(5) to be set 00:23:54.112 [2024-07-24 21:49:02.089275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e00f30 (9): Bad file descriptor 00:23:54.112 [2024-07-24 21:49:02.089303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.112 [2024-07-24 21:49:02.089311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.112 [2024-07-24 21:49:02.089320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:54.112 [2024-07-24 21:49:02.089332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.112 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.112 [2024-07-24 21:49:02.098828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.112 [2024-07-24 21:49:02.099340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.112 [2024-07-24 21:49:02.099356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e00f30 with addr=10.0.0.2, port=4420 00:23:54.112 [2024-07-24 21:49:02.099364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00f30 is same with the state(5) to be set 00:23:54.112 [2024-07-24 21:49:02.099378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e00f30 (9): Bad file descriptor 00:23:54.112 [2024-07-24 21:49:02.099395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.112 [2024-07-24 21:49:02.099403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.112 [2024-07-24 21:49:02.099411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.112 [2024-07-24 21:49:02.099421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.112 [2024-07-24 21:49:02.108883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.112 [2024-07-24 21:49:02.109380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.112 [2024-07-24 21:49:02.109394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e00f30 with addr=10.0.0.2, port=4420 00:23:54.112 [2024-07-24 21:49:02.109403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00f30 is same with the state(5) to be set 00:23:54.112 [2024-07-24 21:49:02.109415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e00f30 (9): Bad file descriptor 00:23:54.112 [2024-07-24 21:49:02.109439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.112 [2024-07-24 21:49:02.109447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.112 [2024-07-24 21:49:02.109455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.112 [2024-07-24 21:49:02.109465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:54.112 [2024-07-24 21:49:02.118935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.112 [2024-07-24 21:49:02.119382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.112 [2024-07-24 21:49:02.119397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e00f30 with addr=10.0.0.2, port=4420 00:23:54.113 [2024-07-24 21:49:02.119405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00f30 is same with the state(5) to be set 00:23:54.113 [2024-07-24 21:49:02.119416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e00f30 (9): Bad file descriptor 00:23:54.113 [2024-07-24 21:49:02.119427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.113 [2024-07-24 21:49:02.119434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.113 [2024-07-24 21:49:02.119444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.113 [2024-07-24 21:49:02.119454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:54.113 [2024-07-24 21:49:02.128990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.113 [2024-07-24 21:49:02.129489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.113 [2024-07-24 21:49:02.129503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e00f30 with addr=10.0.0.2, port=4420 00:23:54.113 [2024-07-24 21:49:02.129510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00f30 is same with the state(5) to be set 00:23:54.113 [2024-07-24 21:49:02.129521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e00f30 (9): Bad file descriptor 00:23:54.113 [2024-07-24 21:49:02.129542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.113 [2024-07-24 21:49:02.129551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.113 [2024-07-24 21:49:02.129558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:54.113 [2024-07-24 21:49:02.129568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.113 [2024-07-24 21:49:02.139048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.113 [2024-07-24 21:49:02.139229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.113 [2024-07-24 21:49:02.139243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e00f30 with addr=10.0.0.2, port=4420 00:23:54.113 [2024-07-24 21:49:02.139251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00f30 is same with the state(5) to be set 00:23:54.113 [2024-07-24 21:49:02.139263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e00f30 (9): Bad file descriptor 00:23:54.113 [2024-07-24 21:49:02.139281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.113 [2024-07-24 21:49:02.139288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.113 [2024-07-24 21:49:02.139299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.113 [2024-07-24 21:49:02.139310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.113 [2024-07-24 21:49:02.149103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.113 [2024-07-24 21:49:02.149480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.113 [2024-07-24 21:49:02.149493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e00f30 with addr=10.0.0.2, port=4420 00:23:54.113 [2024-07-24 21:49:02.149502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00f30 is same with the state(5) to be set 00:23:54.113 [2024-07-24 21:49:02.149512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e00f30 (9): Bad file descriptor 00:23:54.113 [2024-07-24 21:49:02.149529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.113 [2024-07-24 21:49:02.149537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.113 [2024-07-24 21:49:02.149544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
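The path and notification checks in the remainder of the test follow the same pattern: one jq pipeline per query against /tmp/host.sock. A sketch reconstructed from the trace (notify_id is the running offset the test carries between calls; the real helpers may differ):

    get_subsystem_paths() {
        # trsvcid (port) of every path attached to the named controller,
        # e.g. "4420 4421" while both listeners exist, "4421" once 4420 is removed
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    get_notification_count() {
        # notifications issued since the last recorded notify_id
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }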
00:23:54.113 [2024-07-24 21:49:02.149553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.113 [2024-07-24 21:49:02.156553] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:54.113 [2024-07-24 21:49:02.156569] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- 
)) 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:54.113 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:54.373 21:49:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.373 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:54.374 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:54.374 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:54.374 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.374 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.374 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.374 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:54.374 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:54.374 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:54.374 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:54.374 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:54.374 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.374 21:49:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.312 [2024-07-24 21:49:03.424344] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:55.312 [2024-07-24 21:49:03.424360] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:55.312 [2024-07-24 21:49:03.424373] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:55.571 [2024-07-24 21:49:03.510630] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:55.829 [2024-07-24 21:49:03.825244] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:55.829 [2024-07-24 21:49:03.825270] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.829 request: 00:23:55.829 { 00:23:55.829 "name": "nvme", 00:23:55.829 "trtype": "tcp", 00:23:55.829 "traddr": "10.0.0.2", 00:23:55.829 "adrfam": "ipv4", 00:23:55.829 "trsvcid": "8009", 00:23:55.829 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:55.829 "wait_for_attach": true, 00:23:55.829 "method": "bdev_nvme_start_discovery", 00:23:55.829 "req_id": 1 00:23:55.829 } 00:23:55.829 Got JSON-RPC error response 00:23:55.829 response: 00:23:55.829 { 00:23:55.829 "code": -17, 00:23:55.829 "message": "File exists" 00:23:55.829 } 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:55.829 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.088 request: 00:23:56.088 { 00:23:56.088 "name": "nvme_second", 00:23:56.088 "trtype": "tcp", 00:23:56.088 "traddr": "10.0.0.2", 00:23:56.088 "adrfam": "ipv4", 00:23:56.088 "trsvcid": "8009", 00:23:56.088 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:56.088 "wait_for_attach": true, 00:23:56.088 "method": "bdev_nvme_start_discovery", 00:23:56.088 "req_id": 1 00:23:56.088 } 00:23:56.088 Got JSON-RPC error response 00:23:56.088 response: 00:23:56.088 { 00:23:56.088 "code": -17, 00:23:56.088 "message": "File exists" 00:23:56.088 } 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:56.088 21:49:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:56.088 21:49:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.088 21:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.096 [2024-07-24 21:49:05.068996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.096 [2024-07-24 21:49:05.069024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32520 with addr=10.0.0.2, port=8010 00:23:57.096 [2024-07-24 21:49:05.069038] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:57.096 [2024-07-24 21:49:05.069049] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:57.096 [2024-07-24 21:49:05.069056] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:58.034 [2024-07-24 21:49:06.071374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.034 [2024-07-24 21:49:06.071398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e322a0 with addr=10.0.0.2, port=8010 00:23:58.034 [2024-07-24 21:49:06.071409] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:58.034 [2024-07-24 21:49:06.071416] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:23:58.034 [2024-07-24 21:49:06.071422] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:58.973 [2024-07-24 21:49:07.073349] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:58.973 request: 00:23:58.973 { 00:23:58.973 "name": "nvme_second", 00:23:58.973 "trtype": "tcp", 00:23:58.973 "traddr": "10.0.0.2", 00:23:58.973 "adrfam": "ipv4", 00:23:58.973 "trsvcid": "8010", 00:23:58.973 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:58.973 "wait_for_attach": false, 00:23:58.973 "attach_timeout_ms": 3000, 00:23:58.973 "method": "bdev_nvme_start_discovery", 00:23:58.973 "req_id": 1 00:23:58.973 } 00:23:58.973 Got JSON-RPC error response 00:23:58.973 response: 00:23:58.973 { 00:23:58.973 "code": -110, 00:23:58.973 "message": "Connection timed out" 00:23:58.973 } 00:23:58.973 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:58.973 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:58.973 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:58.973 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:58.973 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:58.973 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:58.973 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:58.973 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:58.973 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:58.973 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:58.973 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.973 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3160033 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.232 rmmod nvme_tcp 00:23:59.232 rmmod nvme_fabrics 00:23:59.232 rmmod nvme_keyring 00:23:59.232 21:49:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3159788 ']' 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3159788 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3159788 ']' 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3159788 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3159788 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3159788' 00:23:59.232 killing process with pid 3159788 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3159788 00:23:59.232 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3159788 00:23:59.492 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.492 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.492 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.492 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.492 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.492 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.492 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.492 21:49:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.400 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:01.400 00:24:01.400 real 0m17.912s 00:24:01.400 user 0m22.740s 00:24:01.400 sys 0m5.339s 00:24:01.400 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:01.400 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.400 ************************************ 00:24:01.400 END TEST nvmf_host_discovery 00:24:01.400 ************************************ 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
00:24:01.660 21:49:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.660 ************************************ 00:24:01.660 START TEST nvmf_host_multipath_status 00:24:01.660 ************************************ 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:01.660 * Looking for test storage... 00:24:01.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.660 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.661 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.661 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.661 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.661 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.661 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.661 21:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:06.936 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:06.936 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.936 
21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:06.937 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:06.937 Found net devices under 0000:86:00.0: cvl_0_0 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.937 21:49:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:06.937 Found net devices under 0000:86:00.1: cvl_0_1 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:06.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:24:06.937 00:24:06.937 --- 10.0.0.2 ping statistics --- 00:24:06.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.937 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:24:06.937 00:24:06.937 --- 10.0.0.1 ping statistics --- 00:24:06.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.937 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3165006 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3165006 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3165006 ']' 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:06.937 21:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:06.937 [2024-07-24 21:49:14.747194] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:24:06.937 [2024-07-24 21:49:14.747237] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.937 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.937 [2024-07-24 21:49:14.803439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:06.937 [2024-07-24 21:49:14.884012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.937 [2024-07-24 21:49:14.884054] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.937 [2024-07-24 21:49:14.884061] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.937 [2024-07-24 21:49:14.884067] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.937 [2024-07-24 21:49:14.884072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.937 [2024-07-24 21:49:14.884116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.937 [2024-07-24 21:49:14.884118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.506 21:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.506 21:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:07.506 21:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.506 21:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:07.506 21:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:07.506 21:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.506 21:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3165006 00:24:07.506 21:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:07.765 [2024-07-24 21:49:15.736658] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.765 21:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:08.025 Malloc0 00:24:08.025 21:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:24:08.025 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.284 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.544 [2024-07-24 21:49:16.444964] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.544 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:08.544 [2024-07-24 21:49:16.617434] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:08.544 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3165368 00:24:08.544 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:08.544 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:08.544 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3165368 /var/tmp/bdevperf.sock 00:24:08.544 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3165368 ']' 00:24:08.544 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.544 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.544 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
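For reference, the target-side configuration traced above reduces to the following RPC sequence (rpc.py arguments copied verbatim from the trace, full paths shortened to scripts/rpc.py; this is a condensed sketch, not the multipath_status.sh source):

    # TCP transport plus a 64 MB malloc bdev with 512-byte blocks as the backing namespace
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # one subsystem, reachable through two listeners on the same address but different ports
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then started as the host (-m 0x4 -z -r /var/tmp/bdevperf.sock), attaches Nvme0 to the 4420 listener, and adds the 4421 path with -x multipath, which is what the later bdev_nvme_get_io_paths queries inspect.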
00:24:08.544 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.544 21:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:09.483 21:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.483 21:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:09.483 21:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:09.742 21:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:10.001 Nvme0n1 00:24:10.001 21:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:10.261 Nvme0n1 00:24:10.261 21:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:10.261 21:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:12.800 21:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:12.800 21:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:12.800 21:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:12.800 21:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:13.739 21:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:13.739 21:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:13.739 21:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.739 21:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.999 21:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.999 21:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:13.999 21:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.999 21:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.999 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.999 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.999 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.999 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:14.259 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.259 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:14.259 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.259 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.519 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.519 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:14.519 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.519 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.519 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.519 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:14.519 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.519 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.779 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.779 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:14.779 21:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:15.039 21:49:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:15.298 21:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:16.275 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:16.275 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:16.275 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.275 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.564 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.564 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.564 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.564 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.564 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.564 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.564 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.564 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.824 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.824 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.824 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.824 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.824 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.824 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.824 21:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.824 21:49:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.084 21:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.084 21:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.084 21:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.084 21:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.344 21:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.344 21:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:17.344 21:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:17.604 21:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:17.604 21:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:18.985 21:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:18.985 21:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:18.985 21:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.985 21:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.985 21:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.985 21:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:18.985 21:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.985 21:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:18.985 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:18.985 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:18.985 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.985 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.245 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.245 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.245 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.245 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.504 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.504 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.504 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.504 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.764 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.764 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:19.764 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.764 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:19.764 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.764 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:19.764 21:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:20.024 21:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:20.284 21:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:21.224 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:21.224 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:21.224 21:49:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.224 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.483 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.483 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:21.483 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.483 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.744 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.744 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.744 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.744 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.744 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.744 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.744 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.744 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.003 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.003 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.003 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.003 21:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.264 21:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.264 21:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:22.264 21:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.264 21:49:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.264 21:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.264 21:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:22.264 21:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:22.524 21:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:22.783 21:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:23.722 21:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:23.722 21:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:23.722 21:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.722 21:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:23.981 21:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:23.981 21:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:23.981 21:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.981 21:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:24.241 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.241 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:24.241 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.241 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:24.241 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.241 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:24.241 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:24:24.241 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.501 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.501 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:24.501 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.501 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:24.761 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.761 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:24.761 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.761 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:24.761 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.761 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:24.761 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:25.020 21:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:25.279 21:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:26.218 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:26.218 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:26.218 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.218 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:26.478 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.478 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:26.478 21:49:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.478 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:26.478 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.478 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:26.478 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.478 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:26.739 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.739 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:26.739 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.739 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:26.999 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.999 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:26.999 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.999 21:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:26.999 21:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.999 21:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:26.999 21:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.999 21:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:27.259 21:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.259 21:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:27.518 21:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:27.518 21:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:27.778 21:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:27.778 21:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:29.161 21:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:29.161 21:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:29.161 21:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.161 21:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:29.161 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.161 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:29.161 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.161 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:29.162 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.162 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:29.162 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.162 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:29.420 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.420 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:29.420 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:29.420 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.679 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.679 21:49:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:29.679 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.679 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:29.938 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.938 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:29.938 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.938 21:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:29.938 21:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.938 21:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:29.938 21:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:30.197 21:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:30.457 21:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:31.397 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:31.397 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:31.397 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.397 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:31.714 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:31.714 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:31.714 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.715 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:31.715 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.715 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:31.715 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.715 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:31.979 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.979 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:31.980 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.980 21:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:32.239 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.239 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:32.239 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.239 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:32.239 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.239 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:32.239 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.239 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:32.499 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.499 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:32.499 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:32.759 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:33.018 21:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
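Every check_status round in this run goes through the same two helpers; reconstructed from the traced commands (the helper names match the multipath_status.sh trace markers, but the bodies below are inferred from the trace, not taken from the script source):

    # flip the ANA state advertised by each listener, e.g. set_ANA_state non_optimized inaccessible
    set_ANA_state() {
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    # ask bdevperf (over its own RPC socket) what it thinks of one path:
    # port_status <trsvcid> <current|connected|accessible> <expected true/false>
    port_status() {
        [[ $(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
              | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2") == "$3" ]]
    }

After each set_ANA_state call the script sleeps one second and then asserts the expected current/connected/accessible combination per port, which is the pattern repeated through the remainder of the log.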
00:24:33.959 21:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:33.959 21:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:33.959 21:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.959 21:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:34.220 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.220 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:34.220 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.220 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:34.220 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.220 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:34.220 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.220 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:34.480 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.480 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:34.480 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.480 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:34.740 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.740 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:34.740 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.740 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:35.000 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.000 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:35.000 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.000 21:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:35.000 21:49:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.000 21:49:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:35.000 21:49:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:35.260 21:49:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:35.519 21:49:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:36.458 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:36.459 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:36.459 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.459 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:36.719 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.719 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:36.719 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.719 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:36.719 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:36.719 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:36.719 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.719 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:36.979 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:36.979 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:36.979 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.979 21:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:37.239 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.239 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:37.239 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.239 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:37.239 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.239 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:37.239 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.239 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:37.499 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.499 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3165368 00:24:37.499 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3165368 ']' 00:24:37.499 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3165368 00:24:37.499 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:37.499 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:37.499 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3165368 00:24:37.499 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:37.499 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:37.499 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3165368' 00:24:37.500 killing process with pid 3165368 00:24:37.500 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3165368 00:24:37.500 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3165368 00:24:37.763 Connection closed with partial response: 00:24:37.763 00:24:37.763 00:24:37.763 
21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3165368 00:24:37.763 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:37.763 [2024-07-24 21:49:16.677794] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:24:37.763 [2024-07-24 21:49:16.677844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3165368 ] 00:24:37.763 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.763 [2024-07-24 21:49:16.728424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.763 [2024-07-24 21:49:16.800766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.763 Running I/O for 90 seconds... 00:24:37.763 [2024-07-24 21:49:30.532095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.763 [2024-07-24 21:49:30.532137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.763 [2024-07-24 21:49:30.532182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.763 [2024-07-24 21:49:30.532203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.763 [2024-07-24 21:49:30.532222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:24:37.763 [2024-07-24 21:49:30.532297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532682] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.763 [2024-07-24 21:49:30.532702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:37.763 [2024-07-24 21:49:30.532714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.532720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.532732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.532739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.532751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.532757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.532770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.532777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.533034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.533056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.533072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.533079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.533094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.533101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.533118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.533125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.533139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:37.764 [2024-07-24 21:49:30.533146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.533161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.764 [2024-07-24 21:49:30.533168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.764 [2024-07-24 21:49:30.534477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.764 [2024-07-24 21:49:30.534504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.764 [2024-07-24 21:49:30.534529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.764 [2024-07-24 21:49:30.534555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.764 [2024-07-24 21:49:30.534580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.534915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.534995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.535005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.535025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.535032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.535057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.535065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.535084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.535093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.535113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.535120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.535140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.535146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.535165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.535173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.535192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.535199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.535218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.535226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:37.764 [2024-07-24 21:49:30.535245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.535252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:24:37.764 [2024-07-24 21:49:30.535271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.764 [2024-07-24 21:49:30.535279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:30.535298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:30.535305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:30.535324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:30.535331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:30.535350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:30.535356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:30.535375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:30.535383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:30.535402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:30.535410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:30.535429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:30.535436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:30.535455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:30.535462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:30.535481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:30.535488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:30.535507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:30.535515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:30.535534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:30.535542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:30.535561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:30.535568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.398726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.765 [2024-07-24 21:49:43.398766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.398802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.398811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.398825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.398833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.398846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.398854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.398867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.398874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.398887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.398895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.398912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.398920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.765 [2024-07-24 21:49:43.399191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.765 [2024-07-24 21:49:43.399212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.765 [2024-07-24 21:49:43.399419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.399990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.399997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.400011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.400018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.400031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.400038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.400057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.765 [2024-07-24 21:49:43.400064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:37.765 [2024-07-24 21:49:43.400077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:6 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.766 [2024-07-24 21:49:43.400409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:24:37.766 [2024-07-24 21:49:43.400482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.766 [2024-07-24 21:49:43.400551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.766 [2024-07-24 21:49:43.400571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.766 [2024-07-24 21:49:43.400592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.400975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.400987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.401001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.401008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.401021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.401028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.401041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.401056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.401069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.401076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.401088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.401096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.401108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.401116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.401128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.401135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:37.766 [2024-07-24 21:49:43.401148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.766 [2024-07-24 21:49:43.401157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:37.766 Received shutdown signal, test time was about 27.098395 seconds 00:24:37.766 00:24:37.766 Latency(us) 00:24:37.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.767 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:37.767 Verification LBA range: start 0x0 length 0x4000 00:24:37.767 Nvme0n1 : 27.10 10478.07 40.93 0.00 0.00 12194.28 594.81 3019898.88 00:24:37.767 =================================================================================================================== 00:24:37.767 Total : 10478.07 40.93 
0.00 0.00 12194.28 594.81 3019898.88 00:24:37.767 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.027 rmmod nvme_tcp 00:24:38.027 rmmod nvme_fabrics 00:24:38.027 rmmod nvme_keyring 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3165006 ']' 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3165006 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3165006 ']' 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3165006 00:24:38.027 21:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:38.027 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:38.027 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3165006 00:24:38.027 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:38.027 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:38.027 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3165006' 00:24:38.027 killing process with pid 3165006 00:24:38.027 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3165006 00:24:38.027 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3165006 00:24:38.288 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:38.288 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:38.288 
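The bdevperf summary above reports 10478.07 IOPS at an I/O size of 4096 bytes over roughly 27.1 seconds. As a quick sanity check (not part of the test run itself), the MiB/s column follows directly from those two figures:

```bash
# Sanity check of the bdevperf summary: MiB/s should equal IOPS * io_size / 2^20.
# The input values are copied from the log above.
awk 'BEGIN {
	iops = 10478.07        # reported IOPS for Nvme0n1
	io_size = 4096         # bytes per I/O (from the job description)
	printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)   # -> 40.93 MiB/s
}'
```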
21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:38.288 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.288 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.288 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.288 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.288 21:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.195 21:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:40.456 00:24:40.456 real 0m38.735s 00:24:40.457 user 1m45.657s 00:24:40.457 sys 0m10.256s 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:40.457 ************************************ 00:24:40.457 END TEST nvmf_host_multipath_status 00:24:40.457 ************************************ 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.457 ************************************ 00:24:40.457 START TEST nvmf_discovery_remove_ifc 00:24:40.457 ************************************ 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:40.457 * Looking for test storage... 
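The teardown traced at the end of the multipath test (nvmftestfini) deletes the subsystem, unloads the kernel initiator modules and stops the nvmf target before flushing the test interface. The sketch below condenses that sequence; paths, the subsystem NQN and the interface name come from the log, while the structure is an approximation of nvmf/common.sh, not a copy, and nvmf_tgt_pid is a placeholder for the target pid (3165006 in this run).

```bash
# Condensed sketch of the nvmftestfini sequence traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 1. Remove the subsystem the test created.
"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# 2. Unload the kernel initiator modules (nvme-tcp pulls in fabrics/keyring).
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# 3. Stop the nvmf target process and wait for it to exit.
kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"

# 4. Flush addresses from the test interface (the namespace helper handles the rest).
ip -4 addr flush cvl_0_1
```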
00:24:40.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
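The common.sh sourced above pins the identifiers that the rest of this test reuses. Condensed from the trace for readability (the host NQN/ID come from nvme gen-hostnqn, so the UUID shown is specific to this node):

  NVMF_PORT=4420                    # data subsystem listener
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + full tracepoint mask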
00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:40.457 21:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:45.739 21:49:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:45.739 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:45.739 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:45.739 Found net devices under 0000:86:00.0: cvl_0_0 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.739 
21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:45.739 Found net devices under 0000:86:00.1: cvl_0_1 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.739 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.740 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.740 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:45.740 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.740 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.740 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.740 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:45.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:45.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:24:45.740 00:24:45.740 --- 10.0.0.2 ping statistics --- 00:24:45.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.740 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:24:45.740 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:24:46.000 00:24:46.000 --- 10.0.0.1 ping statistics --- 00:24:46.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.000 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3173668 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3173668 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3173668 ']' 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
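nvmf_tcp_init above moves one of the two ice ports into a private network namespace and keeps the other as the initiator, then nvmfappstart launches the target inside that namespace. Condensed from the trace (interface names are the cvl_* devices enumerated on this machine; substitute your own NICs to reproduce):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port -> namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # reachability check seen above
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &      # target app, RPC on /var/tmp/spdk.sock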
00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.000 21:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.000 [2024-07-24 21:49:53.938106] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:24:46.000 [2024-07-24 21:49:53.938149] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.000 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.000 [2024-07-24 21:49:53.995419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.000 [2024-07-24 21:49:54.074967] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.000 [2024-07-24 21:49:54.075001] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.000 [2024-07-24 21:49:54.075009] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.000 [2024-07-24 21:49:54.075018] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.000 [2024-07-24 21:49:54.075023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.000 [2024-07-24 21:49:54.075047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.942 [2024-07-24 21:49:54.790625] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.942 [2024-07-24 21:49:54.798747] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:46.942 null0 00:24:46.942 [2024-07-24 21:49:54.830767] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3173914 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 
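The rpc_cmd at discovery_remove_ifc.sh@43 provisions the target over /var/tmp/spdk.sock; the log only shows its side effects (a null0 bdev plus TCP listeners on 8009 for discovery and 4420 for the data subsystem). A rough equivalent using standard SPDK RPCs, not a copy of the script's actual batch; the cnode0 NQN and null0 name appear in the trace, while the bdev size and the -a (allow-any-host) choice are assumptions for illustration:

  rpc_cmd nvmf_create_transport -t tcp
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512               # name from the trace; size assumed
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420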
00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3173914 /tmp/host.sock 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3173914 ']' 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:46.942 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.942 21:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.942 [2024-07-24 21:49:54.895924] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:24:46.942 [2024-07-24 21:49:54.895964] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3173914 ] 00:24:46.942 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.942 [2024-07-24 21:49:54.949007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.942 [2024-07-24 21:49:55.030376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:47.882 21:49:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.882 21:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.822 [2024-07-24 21:49:56.800087] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:48.822 [2024-07-24 21:49:56.800106] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:48.822 [2024-07-24 21:49:56.800121] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:48.822 [2024-07-24 21:49:56.888388] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:49.083 [2024-07-24 21:49:57.073089] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:49.083 [2024-07-24 21:49:57.073134] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:49.083 [2024-07-24 21:49:57.073154] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:49.083 [2024-07-24 21:49:57.073165] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:49.083 [2024-07-24 21:49:57.073183] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.083 [2024-07-24 21:49:57.079725] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xce0e60 was disconnected and freed. delete nvme_qpair. 
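wait_for_bdev/get_bdev_list, traced above at discovery_remove_ifc.sh@29-34, boil down to polling the host app's private RPC socket once per second until the bdev list matches the expected name. Paraphrased from the xtrace (helper bodies reconstructed from the commands shown, not copied from the script):

  get_bdev_list() {
      # names of all bdevs the host app currently exposes, as one sorted line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # e.g. wait_for_bdev nvme0n1, or wait_for_bdev '' to wait for the list to drain
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }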
00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:49.083 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:49.343 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:49.343 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.343 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.343 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.343 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.343 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.343 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.343 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.343 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.343 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:49.343 21:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:50.314 21:49:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:50.314 21:49:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.314 21:49:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:50.314 21:49:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.314 21:49:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:50.314 21:49:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.314 21:49:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:50.314 21:49:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.314 21:49:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:50.314 21:49:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:51.256 21:49:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:51.256 21:49:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.256 21:49:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:51.256 21:49:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:51.256 21:49:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.256 21:49:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.256 21:49:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:51.256 21:49:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.256 21:49:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:51.256 21:49:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:52.638 21:50:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.638 21:50:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.638 21:50:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.638 21:50:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.638 21:50:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:52.638 21:50:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.638 21:50:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:52.638 21:50:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.638 21:50:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:52.638 21:50:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:53.577 21:50:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:53.577 21:50:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.577 21:50:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:53.577 21:50:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.577 21:50:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:53.577 21:50:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.577 21:50:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.577 21:50:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.577 21:50:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:53.577 21:50:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:54.517 21:50:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:54.517 21:50:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.517 21:50:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:54.517 21:50:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.517 21:50:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:54.517 21:50:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.517 21:50:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:54.517 21:50:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.517 [2024-07-24 21:50:02.514211] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:54.517 [2024-07-24 21:50:02.514254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.517 [2024-07-24 21:50:02.514266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.517 [2024-07-24 21:50:02.514275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.517 [2024-07-24 21:50:02.514282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.517 [2024-07-24 21:50:02.514290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.517 [2024-07-24 21:50:02.514297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.517 [2024-07-24 21:50:02.514305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.517 [2024-07-24 21:50:02.514311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.517 [2024-07-24 21:50:02.514319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.517 [2024-07-24 21:50:02.514330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.517 [2024-07-24 21:50:02.514336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca76b0 is same with the state(5) to be set 00:24:54.517 [2024-07-24 21:50:02.524232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca76b0 (9): Bad file descriptor 00:24:54.517 21:50:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:54.517 21:50:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:54.517 [2024-07-24 21:50:02.534269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:55.460 21:50:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.461 21:50:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.461 21:50:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.461 21:50:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.461 21:50:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.461 21:50:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.461 21:50:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.721 [2024-07-24 21:50:03.586065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:55.721 [2024-07-24 21:50:03.586124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca76b0 with addr=10.0.0.2, port=4420 00:24:55.721 [2024-07-24 21:50:03.586142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca76b0 is same with the state(5) to be set 00:24:55.721 [2024-07-24 21:50:03.586177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca76b0 (9): Bad file descriptor 00:24:55.721 [2024-07-24 21:50:03.586595] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:55.721 [2024-07-24 21:50:03.586625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:55.721 [2024-07-24 21:50:03.586635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:55.721 [2024-07-24 21:50:03.586646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:55.721 [2024-07-24 21:50:03.586666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.721 [2024-07-24 21:50:03.586677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:55.721 21:50:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.721 21:50:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:55.721 21:50:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:56.661 [2024-07-24 21:50:04.589158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:56.661 [2024-07-24 21:50:04.589179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:56.661 [2024-07-24 21:50:04.589187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:56.661 [2024-07-24 21:50:04.589194] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:56.661 [2024-07-24 21:50:04.589206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
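The repeated connect()/reset failures above are bounded by the timeouts that were passed when discovery was started on the host socket: reconnects are retried every 1 s, I/O is failed fast after 1 s, and the controller (and with it nvme0n1) is dropped once the path has been gone for 2 s. For reference, the call as issued earlier in this trace:

  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach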
00:24:56.661 [2024-07-24 21:50:04.589225] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:56.661 [2024-07-24 21:50:04.589251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.661 [2024-07-24 21:50:04.589262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.661 [2024-07-24 21:50:04.589272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.661 [2024-07-24 21:50:04.589279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.661 [2024-07-24 21:50:04.589286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.661 [2024-07-24 21:50:04.589294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.661 [2024-07-24 21:50:04.589301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.661 [2024-07-24 21:50:04.589308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.661 [2024-07-24 21:50:04.589316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.661 [2024-07-24 21:50:04.589324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.661 [2024-07-24 21:50:04.589331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
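The path loss being handled here was injected a few seconds earlier by discovery_remove_ifc.sh@75/@76, which strip the target address and down its port inside the namespace; once the 2 s loss timeout expires, the discovery entry for cnode0 is removed and wait_for_bdev '' sees the bdev list drain:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down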
00:24:56.661 [2024-07-24 21:50:04.589427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6a80 (9): Bad file descriptor 00:24:56.661 [2024-07-24 21:50:04.590438] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:56.661 [2024-07-24 21:50:04.590449] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.661 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.921 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:56.921 21:50:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.860 21:50:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:57.860 21:50:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:57.860 21:50:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.860 21:50:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:57.860 21:50:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.860 21:50:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:57.860 21:50:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:57.860 21:50:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.860 21:50:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:57.860 21:50:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:58.801 [2024-07-24 21:50:06.644938] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:58.801 [2024-07-24 21:50:06.644954] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:58.801 [2024-07-24 21:50:06.644968] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:58.801 [2024-07-24 21:50:06.774362] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:58.801 21:50:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:58.801 21:50:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.801 21:50:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:58.801 21:50:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.801 21:50:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:58.801 21:50:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.801 21:50:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:58.801 21:50:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.801 [2024-07-24 21:50:06.880157] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:58.801 [2024-07-24 21:50:06.880191] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:58.801 [2024-07-24 21:50:06.880209] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:58.801 [2024-07-24 21:50:06.880222] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:58.801 [2024-07-24 21:50:06.880229] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:58.801 [2024-07-24 21:50:06.884762] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xcae180 was disconnected and freed. delete nvme_qpair. 
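Recovery is the mirror image: discovery_remove_ifc.sh@82/@83 re-add the address and bring the port back up, the discovery poller re-attaches and surfaces the subsystem as a new controller (nvme1), and the script waits for the bdev list to read exactly nvme1n1 before tearing down:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # followed by: wait_for_bdev nvme1n1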
00:24:58.801 21:50:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:58.801 21:50:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3173914 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3173914 ']' 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3173914 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3173914 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3173914' 00:25:00.184 killing process with pid 3173914 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3173914 00:25:00.184 21:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3173914 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:00.184 rmmod nvme_tcp 00:25:00.184 rmmod nvme_fabrics 00:25:00.184 rmmod nvme_keyring 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3173668 ']' 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3173668 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3173668 ']' 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3173668 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3173668 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3173668' 00:25:00.184 killing process with pid 3173668 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3173668 00:25:00.184 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3173668 00:25:00.444 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:00.444 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:00.444 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:00.444 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:00.444 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:00.444 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.444 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.444 21:50:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:02.987 00:25:02.987 real 0m22.144s 00:25:02.987 user 0m28.863s 00:25:02.987 sys 0m5.332s 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:02.987 
************************************ 00:25:02.987 END TEST nvmf_discovery_remove_ifc 00:25:02.987 ************************************ 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.987 ************************************ 00:25:02.987 START TEST nvmf_identify_kernel_target 00:25:02.987 ************************************ 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:02.987 * Looking for test storage... 00:25:02.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:02.987 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:02.988 21:50:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:08.270 
21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.270 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:08.271 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:08.271 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:08.271 Found net devices under 0000:86:00.0: cvl_0_0 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:25:08.271 Found net devices under 0000:86:00.1: cvl_0_1 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:08.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:08.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:25:08.271 00:25:08.271 --- 10.0.0.2 ping statistics --- 00:25:08.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.271 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:25:08.271 00:25:08.271 --- 10.0.0.1 ping statistics --- 00:25:08.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.271 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:08.271 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:08.530 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:08.530 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:08.530 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:08.530 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:08.530 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:08.530 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:08.530 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:08.530 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:08.530 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:08.530 21:50:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:11.136 Waiting for block devices as requested 00:25:11.136 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:11.136 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:11.397 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:11.397 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:11.397 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:11.397 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:11.657 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:11.657 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:11.657 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:11.657 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:11.918 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:11.918 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:11.918 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:12.178 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:12.178 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:12.178 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:12.178 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:12.441 No valid GPT data, bailing 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:12.441 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:12.441 00:25:12.441 Discovery Log Number of Records 2, Generation counter 2 00:25:12.441 =====Discovery Log Entry 0====== 00:25:12.441 trtype: tcp 00:25:12.441 adrfam: ipv4 00:25:12.441 subtype: current discovery subsystem 00:25:12.441 treq: not specified, sq flow control disable supported 00:25:12.441 portid: 1 00:25:12.441 trsvcid: 4420 00:25:12.441 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:12.441 traddr: 10.0.0.1 00:25:12.441 eflags: none 00:25:12.441 sectype: none 00:25:12.441 =====Discovery Log Entry 1====== 00:25:12.441 trtype: tcp 00:25:12.441 adrfam: ipv4 00:25:12.441 subtype: nvme subsystem 00:25:12.441 treq: not specified, sq flow control disable supported 00:25:12.441 portid: 1 00:25:12.441 trsvcid: 4420 00:25:12.441 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:12.441 traddr: 10.0.0.1 00:25:12.441 eflags: none 00:25:12.441 sectype: none 00:25:12.441 21:50:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:12.441 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:12.441 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.441 ===================================================== 00:25:12.441 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:12.441 ===================================================== 00:25:12.441 Controller Capabilities/Features 00:25:12.441 ================================ 00:25:12.441 Vendor ID: 0000 00:25:12.441 Subsystem Vendor ID: 0000 00:25:12.441 Serial Number: 027bfcf5bced827ab4ec 00:25:12.441 Model Number: Linux 00:25:12.441 Firmware Version: 6.7.0-68 00:25:12.441 Recommended Arb Burst: 0 00:25:12.441 IEEE OUI Identifier: 00 00 00 00:25:12.441 Multi-path I/O 00:25:12.441 May have multiple subsystem ports: No 00:25:12.441 May have multiple controllers: No 00:25:12.441 Associated with SR-IOV VF: No 00:25:12.441 Max Data Transfer Size: Unlimited 00:25:12.441 Max Number of Namespaces: 0 00:25:12.441 Max Number of I/O Queues: 1024 00:25:12.441 NVMe Specification Version (VS): 1.3 00:25:12.441 NVMe Specification Version (Identify): 1.3 00:25:12.441 Maximum Queue Entries: 1024 00:25:12.441 Contiguous Queues Required: No 00:25:12.441 Arbitration Mechanisms Supported 00:25:12.441 Weighted Round Robin: Not Supported 00:25:12.441 Vendor Specific: Not Supported 00:25:12.441 Reset Timeout: 7500 ms 00:25:12.441 Doorbell Stride: 4 bytes 00:25:12.441 NVM Subsystem Reset: Not Supported 00:25:12.441 Command Sets Supported 00:25:12.441 NVM Command Set: Supported 00:25:12.441 Boot Partition: Not Supported 00:25:12.441 Memory Page Size Minimum: 4096 bytes 00:25:12.441 Memory Page Size Maximum: 4096 bytes 00:25:12.441 Persistent Memory Region: Not Supported 00:25:12.441 Optional Asynchronous Events Supported 00:25:12.441 Namespace Attribute Notices: Not Supported 00:25:12.441 Firmware Activation Notices: Not Supported 00:25:12.441 ANA Change Notices: Not Supported 00:25:12.441 PLE Aggregate Log Change Notices: Not Supported 00:25:12.441 LBA Status Info Alert Notices: Not Supported 00:25:12.441 EGE Aggregate Log Change Notices: Not Supported 00:25:12.441 Normal NVM Subsystem Shutdown event: Not Supported 00:25:12.441 Zone Descriptor Change Notices: Not Supported 00:25:12.441 Discovery Log Change Notices: Supported 00:25:12.441 Controller Attributes 00:25:12.441 128-bit Host Identifier: Not Supported 00:25:12.441 Non-Operational Permissive Mode: Not Supported 00:25:12.441 NVM Sets: Not Supported 00:25:12.441 Read Recovery Levels: Not Supported 00:25:12.441 Endurance Groups: Not Supported 00:25:12.441 Predictable Latency Mode: Not Supported 00:25:12.441 Traffic Based Keep ALive: Not Supported 00:25:12.441 Namespace Granularity: Not Supported 00:25:12.441 SQ Associations: Not Supported 00:25:12.441 UUID List: Not Supported 00:25:12.441 Multi-Domain Subsystem: Not Supported 00:25:12.441 Fixed Capacity Management: Not Supported 00:25:12.441 Variable Capacity Management: Not Supported 00:25:12.441 Delete Endurance Group: Not Supported 00:25:12.441 Delete NVM Set: Not Supported 00:25:12.441 Extended LBA Formats Supported: Not Supported 00:25:12.441 Flexible Data Placement Supported: Not Supported 00:25:12.441 00:25:12.441 Controller Memory Buffer Support 00:25:12.441 ================================ 00:25:12.441 Supported: No 
00:25:12.441 00:25:12.441 Persistent Memory Region Support 00:25:12.441 ================================ 00:25:12.441 Supported: No 00:25:12.441 00:25:12.441 Admin Command Set Attributes 00:25:12.441 ============================ 00:25:12.441 Security Send/Receive: Not Supported 00:25:12.441 Format NVM: Not Supported 00:25:12.442 Firmware Activate/Download: Not Supported 00:25:12.442 Namespace Management: Not Supported 00:25:12.442 Device Self-Test: Not Supported 00:25:12.442 Directives: Not Supported 00:25:12.442 NVMe-MI: Not Supported 00:25:12.442 Virtualization Management: Not Supported 00:25:12.442 Doorbell Buffer Config: Not Supported 00:25:12.442 Get LBA Status Capability: Not Supported 00:25:12.442 Command & Feature Lockdown Capability: Not Supported 00:25:12.442 Abort Command Limit: 1 00:25:12.442 Async Event Request Limit: 1 00:25:12.442 Number of Firmware Slots: N/A 00:25:12.442 Firmware Slot 1 Read-Only: N/A 00:25:12.442 Firmware Activation Without Reset: N/A 00:25:12.442 Multiple Update Detection Support: N/A 00:25:12.442 Firmware Update Granularity: No Information Provided 00:25:12.442 Per-Namespace SMART Log: No 00:25:12.442 Asymmetric Namespace Access Log Page: Not Supported 00:25:12.442 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:12.442 Command Effects Log Page: Not Supported 00:25:12.442 Get Log Page Extended Data: Supported 00:25:12.442 Telemetry Log Pages: Not Supported 00:25:12.442 Persistent Event Log Pages: Not Supported 00:25:12.442 Supported Log Pages Log Page: May Support 00:25:12.442 Commands Supported & Effects Log Page: Not Supported 00:25:12.442 Feature Identifiers & Effects Log Page:May Support 00:25:12.442 NVMe-MI Commands & Effects Log Page: May Support 00:25:12.442 Data Area 4 for Telemetry Log: Not Supported 00:25:12.442 Error Log Page Entries Supported: 1 00:25:12.442 Keep Alive: Not Supported 00:25:12.442 00:25:12.442 NVM Command Set Attributes 00:25:12.442 ========================== 00:25:12.442 Submission Queue Entry Size 00:25:12.442 Max: 1 00:25:12.442 Min: 1 00:25:12.442 Completion Queue Entry Size 00:25:12.442 Max: 1 00:25:12.442 Min: 1 00:25:12.442 Number of Namespaces: 0 00:25:12.442 Compare Command: Not Supported 00:25:12.442 Write Uncorrectable Command: Not Supported 00:25:12.442 Dataset Management Command: Not Supported 00:25:12.442 Write Zeroes Command: Not Supported 00:25:12.442 Set Features Save Field: Not Supported 00:25:12.442 Reservations: Not Supported 00:25:12.442 Timestamp: Not Supported 00:25:12.442 Copy: Not Supported 00:25:12.442 Volatile Write Cache: Not Present 00:25:12.442 Atomic Write Unit (Normal): 1 00:25:12.442 Atomic Write Unit (PFail): 1 00:25:12.442 Atomic Compare & Write Unit: 1 00:25:12.442 Fused Compare & Write: Not Supported 00:25:12.442 Scatter-Gather List 00:25:12.442 SGL Command Set: Supported 00:25:12.442 SGL Keyed: Not Supported 00:25:12.442 SGL Bit Bucket Descriptor: Not Supported 00:25:12.442 SGL Metadata Pointer: Not Supported 00:25:12.442 Oversized SGL: Not Supported 00:25:12.442 SGL Metadata Address: Not Supported 00:25:12.442 SGL Offset: Supported 00:25:12.442 Transport SGL Data Block: Not Supported 00:25:12.442 Replay Protected Memory Block: Not Supported 00:25:12.442 00:25:12.442 Firmware Slot Information 00:25:12.442 ========================= 00:25:12.442 Active slot: 0 00:25:12.442 00:25:12.442 00:25:12.442 Error Log 00:25:12.442 ========= 00:25:12.442 00:25:12.442 Active Namespaces 00:25:12.442 ================= 00:25:12.442 Discovery Log Page 00:25:12.442 ================== 00:25:12.442 
Generation Counter: 2 00:25:12.442 Number of Records: 2 00:25:12.442 Record Format: 0 00:25:12.442 00:25:12.442 Discovery Log Entry 0 00:25:12.442 ---------------------- 00:25:12.442 Transport Type: 3 (TCP) 00:25:12.442 Address Family: 1 (IPv4) 00:25:12.442 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:12.442 Entry Flags: 00:25:12.442 Duplicate Returned Information: 0 00:25:12.442 Explicit Persistent Connection Support for Discovery: 0 00:25:12.442 Transport Requirements: 00:25:12.442 Secure Channel: Not Specified 00:25:12.442 Port ID: 1 (0x0001) 00:25:12.442 Controller ID: 65535 (0xffff) 00:25:12.442 Admin Max SQ Size: 32 00:25:12.442 Transport Service Identifier: 4420 00:25:12.442 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:12.442 Transport Address: 10.0.0.1 00:25:12.442 Discovery Log Entry 1 00:25:12.442 ---------------------- 00:25:12.442 Transport Type: 3 (TCP) 00:25:12.442 Address Family: 1 (IPv4) 00:25:12.442 Subsystem Type: 2 (NVM Subsystem) 00:25:12.442 Entry Flags: 00:25:12.442 Duplicate Returned Information: 0 00:25:12.442 Explicit Persistent Connection Support for Discovery: 0 00:25:12.442 Transport Requirements: 00:25:12.442 Secure Channel: Not Specified 00:25:12.442 Port ID: 1 (0x0001) 00:25:12.442 Controller ID: 65535 (0xffff) 00:25:12.442 Admin Max SQ Size: 32 00:25:12.442 Transport Service Identifier: 4420 00:25:12.442 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:12.442 Transport Address: 10.0.0.1 00:25:12.442 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:12.442 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.442 get_feature(0x01) failed 00:25:12.442 get_feature(0x02) failed 00:25:12.442 get_feature(0x04) failed 00:25:12.442 ===================================================== 00:25:12.442 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:12.442 ===================================================== 00:25:12.442 Controller Capabilities/Features 00:25:12.442 ================================ 00:25:12.442 Vendor ID: 0000 00:25:12.442 Subsystem Vendor ID: 0000 00:25:12.442 Serial Number: c28168908c635c04d04d 00:25:12.442 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:12.442 Firmware Version: 6.7.0-68 00:25:12.442 Recommended Arb Burst: 6 00:25:12.442 IEEE OUI Identifier: 00 00 00 00:25:12.442 Multi-path I/O 00:25:12.442 May have multiple subsystem ports: Yes 00:25:12.442 May have multiple controllers: Yes 00:25:12.442 Associated with SR-IOV VF: No 00:25:12.442 Max Data Transfer Size: Unlimited 00:25:12.442 Max Number of Namespaces: 1024 00:25:12.442 Max Number of I/O Queues: 128 00:25:12.442 NVMe Specification Version (VS): 1.3 00:25:12.442 NVMe Specification Version (Identify): 1.3 00:25:12.442 Maximum Queue Entries: 1024 00:25:12.442 Contiguous Queues Required: No 00:25:12.442 Arbitration Mechanisms Supported 00:25:12.442 Weighted Round Robin: Not Supported 00:25:12.442 Vendor Specific: Not Supported 00:25:12.442 Reset Timeout: 7500 ms 00:25:12.442 Doorbell Stride: 4 bytes 00:25:12.442 NVM Subsystem Reset: Not Supported 00:25:12.442 Command Sets Supported 00:25:12.442 NVM Command Set: Supported 00:25:12.442 Boot Partition: Not Supported 00:25:12.442 Memory Page Size Minimum: 4096 bytes 00:25:12.442 Memory Page Size Maximum: 4096 bytes 00:25:12.442 
Persistent Memory Region: Not Supported 00:25:12.442 Optional Asynchronous Events Supported 00:25:12.442 Namespace Attribute Notices: Supported 00:25:12.442 Firmware Activation Notices: Not Supported 00:25:12.442 ANA Change Notices: Supported 00:25:12.442 PLE Aggregate Log Change Notices: Not Supported 00:25:12.442 LBA Status Info Alert Notices: Not Supported 00:25:12.442 EGE Aggregate Log Change Notices: Not Supported 00:25:12.442 Normal NVM Subsystem Shutdown event: Not Supported 00:25:12.442 Zone Descriptor Change Notices: Not Supported 00:25:12.442 Discovery Log Change Notices: Not Supported 00:25:12.442 Controller Attributes 00:25:12.442 128-bit Host Identifier: Supported 00:25:12.442 Non-Operational Permissive Mode: Not Supported 00:25:12.442 NVM Sets: Not Supported 00:25:12.442 Read Recovery Levels: Not Supported 00:25:12.442 Endurance Groups: Not Supported 00:25:12.442 Predictable Latency Mode: Not Supported 00:25:12.442 Traffic Based Keep ALive: Supported 00:25:12.442 Namespace Granularity: Not Supported 00:25:12.442 SQ Associations: Not Supported 00:25:12.442 UUID List: Not Supported 00:25:12.442 Multi-Domain Subsystem: Not Supported 00:25:12.442 Fixed Capacity Management: Not Supported 00:25:12.442 Variable Capacity Management: Not Supported 00:25:12.442 Delete Endurance Group: Not Supported 00:25:12.442 Delete NVM Set: Not Supported 00:25:12.442 Extended LBA Formats Supported: Not Supported 00:25:12.442 Flexible Data Placement Supported: Not Supported 00:25:12.442 00:25:12.442 Controller Memory Buffer Support 00:25:12.442 ================================ 00:25:12.442 Supported: No 00:25:12.442 00:25:12.442 Persistent Memory Region Support 00:25:12.442 ================================ 00:25:12.442 Supported: No 00:25:12.442 00:25:12.442 Admin Command Set Attributes 00:25:12.443 ============================ 00:25:12.443 Security Send/Receive: Not Supported 00:25:12.443 Format NVM: Not Supported 00:25:12.443 Firmware Activate/Download: Not Supported 00:25:12.443 Namespace Management: Not Supported 00:25:12.443 Device Self-Test: Not Supported 00:25:12.443 Directives: Not Supported 00:25:12.443 NVMe-MI: Not Supported 00:25:12.443 Virtualization Management: Not Supported 00:25:12.443 Doorbell Buffer Config: Not Supported 00:25:12.443 Get LBA Status Capability: Not Supported 00:25:12.443 Command & Feature Lockdown Capability: Not Supported 00:25:12.443 Abort Command Limit: 4 00:25:12.443 Async Event Request Limit: 4 00:25:12.443 Number of Firmware Slots: N/A 00:25:12.443 Firmware Slot 1 Read-Only: N/A 00:25:12.443 Firmware Activation Without Reset: N/A 00:25:12.443 Multiple Update Detection Support: N/A 00:25:12.443 Firmware Update Granularity: No Information Provided 00:25:12.443 Per-Namespace SMART Log: Yes 00:25:12.443 Asymmetric Namespace Access Log Page: Supported 00:25:12.443 ANA Transition Time : 10 sec 00:25:12.443 00:25:12.443 Asymmetric Namespace Access Capabilities 00:25:12.443 ANA Optimized State : Supported 00:25:12.443 ANA Non-Optimized State : Supported 00:25:12.443 ANA Inaccessible State : Supported 00:25:12.443 ANA Persistent Loss State : Supported 00:25:12.443 ANA Change State : Supported 00:25:12.443 ANAGRPID is not changed : No 00:25:12.443 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:12.443 00:25:12.443 ANA Group Identifier Maximum : 128 00:25:12.443 Number of ANA Group Identifiers : 128 00:25:12.443 Max Number of Allowed Namespaces : 1024 00:25:12.443 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:12.443 Command Effects Log Page: Supported 
00:25:12.443 Get Log Page Extended Data: Supported 00:25:12.443 Telemetry Log Pages: Not Supported 00:25:12.443 Persistent Event Log Pages: Not Supported 00:25:12.443 Supported Log Pages Log Page: May Support 00:25:12.443 Commands Supported & Effects Log Page: Not Supported 00:25:12.443 Feature Identifiers & Effects Log Page:May Support 00:25:12.443 NVMe-MI Commands & Effects Log Page: May Support 00:25:12.443 Data Area 4 for Telemetry Log: Not Supported 00:25:12.443 Error Log Page Entries Supported: 128 00:25:12.443 Keep Alive: Supported 00:25:12.443 Keep Alive Granularity: 1000 ms 00:25:12.443 00:25:12.443 NVM Command Set Attributes 00:25:12.443 ========================== 00:25:12.443 Submission Queue Entry Size 00:25:12.443 Max: 64 00:25:12.443 Min: 64 00:25:12.443 Completion Queue Entry Size 00:25:12.443 Max: 16 00:25:12.443 Min: 16 00:25:12.443 Number of Namespaces: 1024 00:25:12.443 Compare Command: Not Supported 00:25:12.443 Write Uncorrectable Command: Not Supported 00:25:12.443 Dataset Management Command: Supported 00:25:12.443 Write Zeroes Command: Supported 00:25:12.443 Set Features Save Field: Not Supported 00:25:12.443 Reservations: Not Supported 00:25:12.443 Timestamp: Not Supported 00:25:12.443 Copy: Not Supported 00:25:12.443 Volatile Write Cache: Present 00:25:12.443 Atomic Write Unit (Normal): 1 00:25:12.443 Atomic Write Unit (PFail): 1 00:25:12.443 Atomic Compare & Write Unit: 1 00:25:12.443 Fused Compare & Write: Not Supported 00:25:12.443 Scatter-Gather List 00:25:12.443 SGL Command Set: Supported 00:25:12.443 SGL Keyed: Not Supported 00:25:12.443 SGL Bit Bucket Descriptor: Not Supported 00:25:12.443 SGL Metadata Pointer: Not Supported 00:25:12.443 Oversized SGL: Not Supported 00:25:12.443 SGL Metadata Address: Not Supported 00:25:12.443 SGL Offset: Supported 00:25:12.443 Transport SGL Data Block: Not Supported 00:25:12.443 Replay Protected Memory Block: Not Supported 00:25:12.443 00:25:12.443 Firmware Slot Information 00:25:12.443 ========================= 00:25:12.443 Active slot: 0 00:25:12.443 00:25:12.443 Asymmetric Namespace Access 00:25:12.443 =========================== 00:25:12.443 Change Count : 0 00:25:12.443 Number of ANA Group Descriptors : 1 00:25:12.443 ANA Group Descriptor : 0 00:25:12.443 ANA Group ID : 1 00:25:12.443 Number of NSID Values : 1 00:25:12.443 Change Count : 0 00:25:12.443 ANA State : 1 00:25:12.443 Namespace Identifier : 1 00:25:12.443 00:25:12.443 Commands Supported and Effects 00:25:12.443 ============================== 00:25:12.443 Admin Commands 00:25:12.443 -------------- 00:25:12.443 Get Log Page (02h): Supported 00:25:12.443 Identify (06h): Supported 00:25:12.443 Abort (08h): Supported 00:25:12.443 Set Features (09h): Supported 00:25:12.443 Get Features (0Ah): Supported 00:25:12.443 Asynchronous Event Request (0Ch): Supported 00:25:12.443 Keep Alive (18h): Supported 00:25:12.443 I/O Commands 00:25:12.443 ------------ 00:25:12.443 Flush (00h): Supported 00:25:12.443 Write (01h): Supported LBA-Change 00:25:12.443 Read (02h): Supported 00:25:12.443 Write Zeroes (08h): Supported LBA-Change 00:25:12.443 Dataset Management (09h): Supported 00:25:12.443 00:25:12.443 Error Log 00:25:12.443 ========= 00:25:12.443 Entry: 0 00:25:12.443 Error Count: 0x3 00:25:12.443 Submission Queue Id: 0x0 00:25:12.443 Command Id: 0x5 00:25:12.443 Phase Bit: 0 00:25:12.443 Status Code: 0x2 00:25:12.443 Status Code Type: 0x0 00:25:12.443 Do Not Retry: 1 00:25:12.443 Error Location: 0x28 00:25:12.443 LBA: 0x0 00:25:12.443 Namespace: 0x0 00:25:12.443 Vendor Log 
Page: 0x0 00:25:12.443 ----------- 00:25:12.443 Entry: 1 00:25:12.443 Error Count: 0x2 00:25:12.443 Submission Queue Id: 0x0 00:25:12.443 Command Id: 0x5 00:25:12.443 Phase Bit: 0 00:25:12.443 Status Code: 0x2 00:25:12.443 Status Code Type: 0x0 00:25:12.443 Do Not Retry: 1 00:25:12.443 Error Location: 0x28 00:25:12.443 LBA: 0x0 00:25:12.443 Namespace: 0x0 00:25:12.443 Vendor Log Page: 0x0 00:25:12.443 ----------- 00:25:12.443 Entry: 2 00:25:12.443 Error Count: 0x1 00:25:12.443 Submission Queue Id: 0x0 00:25:12.443 Command Id: 0x4 00:25:12.443 Phase Bit: 0 00:25:12.443 Status Code: 0x2 00:25:12.443 Status Code Type: 0x0 00:25:12.443 Do Not Retry: 1 00:25:12.443 Error Location: 0x28 00:25:12.443 LBA: 0x0 00:25:12.443 Namespace: 0x0 00:25:12.443 Vendor Log Page: 0x0 00:25:12.443 00:25:12.443 Number of Queues 00:25:12.443 ================ 00:25:12.443 Number of I/O Submission Queues: 128 00:25:12.443 Number of I/O Completion Queues: 128 00:25:12.443 00:25:12.443 ZNS Specific Controller Data 00:25:12.443 ============================ 00:25:12.443 Zone Append Size Limit: 0 00:25:12.443 00:25:12.443 00:25:12.443 Active Namespaces 00:25:12.443 ================= 00:25:12.443 get_feature(0x05) failed 00:25:12.443 Namespace ID:1 00:25:12.443 Command Set Identifier: NVM (00h) 00:25:12.443 Deallocate: Supported 00:25:12.443 Deallocated/Unwritten Error: Not Supported 00:25:12.443 Deallocated Read Value: Unknown 00:25:12.443 Deallocate in Write Zeroes: Not Supported 00:25:12.443 Deallocated Guard Field: 0xFFFF 00:25:12.443 Flush: Supported 00:25:12.443 Reservation: Not Supported 00:25:12.443 Namespace Sharing Capabilities: Multiple Controllers 00:25:12.443 Size (in LBAs): 1953525168 (931GiB) 00:25:12.443 Capacity (in LBAs): 1953525168 (931GiB) 00:25:12.443 Utilization (in LBAs): 1953525168 (931GiB) 00:25:12.443 UUID: b7d18952-10c1-4ea4-9d56-f9ff2caeae78 00:25:12.443 Thin Provisioning: Not Supported 00:25:12.443 Per-NS Atomic Units: Yes 00:25:12.443 Atomic Boundary Size (Normal): 0 00:25:12.443 Atomic Boundary Size (PFail): 0 00:25:12.443 Atomic Boundary Offset: 0 00:25:12.443 NGUID/EUI64 Never Reused: No 00:25:12.443 ANA group ID: 1 00:25:12.443 Namespace Write Protected: No 00:25:12.443 Number of LBA Formats: 1 00:25:12.443 Current LBA Format: LBA Format #00 00:25:12.443 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:12.443 00:25:12.443 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:12.443 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:12.443 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:12.443 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:12.443 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:12.443 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:12.443 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:12.443 rmmod nvme_tcp 00:25:12.443 rmmod nvme_fabrics 00:25:12.703 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:12.703 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:12.703 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:12.703 21:50:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:12.703 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:12.703 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:12.703 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:12.703 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.703 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.703 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.703 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.703 21:50:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.613 21:50:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:14.613 21:50:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:14.613 21:50:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:14.613 21:50:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:14.613 21:50:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:14.613 21:50:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:14.613 21:50:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:14.613 21:50:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:14.613 21:50:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:14.613 21:50:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:14.613 21:50:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:17.912 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:17.912 0000:80:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:25:17.912 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:18.483 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:18.483 00:25:18.483 real 0m15.778s 00:25:18.483 user 0m3.934s 00:25:18.483 sys 0m8.236s 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:18.483 ************************************ 00:25:18.483 END TEST nvmf_identify_kernel_target 00:25:18.483 ************************************ 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.483 ************************************ 00:25:18.483 START TEST nvmf_auth_host 00:25:18.483 ************************************ 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:18.483 * Looking for test storage... 00:25:18.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.483 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
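Note on the host identity exported a little earlier above: nvme gen-hostnqn produces the host NQN, and the test reuses its UUID suffix as the host ID; both are passed on every discover/connect in this run via the NVME_HOST array. A minimal sketch, assuming the nvme-cli flags that appear later in this log and this run's defaults (initiator address 10.0.0.1, TCP port 4420); the suffix-stripping shortcut is illustrative, not the exact common.sh code:

# Sketch: host identity as used by this run, then a discovery against the kernel target.
NVME_HOSTNQN=$(nvme gen-hostnqn)                    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                 # the UUID part doubles as the host ID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.1 -s 4420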
00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:18.484 21:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:23.770 21:50:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:23.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
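The device scan above keys off PCI vendor/device IDs (the e810/x722/mlx arrays) and then resolves each matching PCI function to its kernel netdev through sysfs. A minimal sketch of that last step, using an example PCI address from this run:

# Sketch: list the netdevs sysfs exposes under one PCI function (example address).
pci=0000:86:00.0
for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
  [[ -e $netdev ]] && echo "Found net devices under $pci: ${netdev##*/}"
done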
00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:23.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:23.770 Found net devices under 0000:86:00.0: cvl_0_0 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:23.770 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:23.771 Found net devices under 0000:86:00.1: cvl_0_1 00:25:23.771 21:50:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.771 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.033 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.033 21:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.033 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:24.033 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.033 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.033 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:24.033 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:24.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:24.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:25:24.033 00:25:24.033 --- 10.0.0.2 ping statistics --- 00:25:24.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.033 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:25:24.033 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:24.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.493 ms 00:25:24.033 00:25:24.033 --- 10.0.0.1 ping statistics --- 00:25:24.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.033 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:25:24.033 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.033 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:24.033 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:24.033 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.293 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:24.293 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:24.293 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3185763 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3185763 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3185763 ']' 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
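The nvmf_tcp_init sequence just logged splits the two E810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and one ping in each direction confirms the path before the nvmf target app is started inside the namespace. A condensed sketch of the same steps:

# Sketch of the target/initiator split performed by nvmf_tcp_init in this run.
ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                       # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1   # sanity check both directions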
00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:24.294 21:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=21f755f438321c63089bd485a48238cb 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.p5H 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 21f755f438321c63089bd485a48238cb 0 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 21f755f438321c63089bd485a48238cb 0 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=21f755f438321c63089bd485a48238cb 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.p5H 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.p5H 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.p5H 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:25.234 21:50:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3095a85a0bd2476323ccbc6ac8597bbfd6ddad0c4ee1f45b6b77cdd353b4b4a9 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Gw2 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3095a85a0bd2476323ccbc6ac8597bbfd6ddad0c4ee1f45b6b77cdd353b4b4a9 3 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3095a85a0bd2476323ccbc6ac8597bbfd6ddad0c4ee1f45b6b77cdd353b4b4a9 3 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:25.234 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3095a85a0bd2476323ccbc6ac8597bbfd6ddad0c4ee1f45b6b77cdd353b4b4a9 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Gw2 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Gw2 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Gw2 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=736a00772d28b585b4a25907e259638c9bb3003563bcf973 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.6ZG 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 736a00772d28b585b4a25907e259638c9bb3003563bcf973 0 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 736a00772d28b585b4a25907e259638c9bb3003563bcf973 0 
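gen_dhchap_key, invoked repeatedly here, reads the requested number of hex characters from /dev/urandom, writes a DHHC-1 secret to a 0600 temp file (/tmp/spdk.key-<digest>.XXX) and hands the path back to auth.sh as keys[i]/ckeys[i]. Judging from the encoded values visible later in this log, the inline python step base64-encodes the hex text itself plus a 4-byte trailer (presumably its little-endian CRC32, as in standard DHHC-1 secrets) behind a hash-id prefix (0=null, 1=sha256, 2=sha384, 3=sha512). A minimal sketch under those assumptions, not the exact common.sh helper:

# Sketch of gen_dhchap_key: random hex secret -> DHHC-1 secret file.
digest=sha256 len=32                                  # len is in hex characters
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t "spdk.key-$digest.XXX")
python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                # the hex text is used as the secret
crc = struct.pack('<I', zlib.crc32(secret))  # assumed 4-byte CRC32 trailer
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(secret + crc).decode()}:")
PY
chmod 0600 "$file"
echo "$file"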
00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=736a00772d28b585b4a25907e259638c9bb3003563bcf973 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.6ZG 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.6ZG 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.6ZG 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1aab9bfbcf6016616036e9effb9f252a4f6712ceef7dd103 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zD8 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1aab9bfbcf6016616036e9effb9f252a4f6712ceef7dd103 2 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1aab9bfbcf6016616036e9effb9f252a4f6712ceef7dd103 2 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1aab9bfbcf6016616036e9effb9f252a4f6712ceef7dd103 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zD8 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zD8 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zD8 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:25.235 21:50:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4aeddd7b7565d63279fabf3c6376e46a 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Kps 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4aeddd7b7565d63279fabf3c6376e46a 1 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4aeddd7b7565d63279fabf3c6376e46a 1 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4aeddd7b7565d63279fabf3c6376e46a 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:25.235 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Kps 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Kps 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Kps 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c5d1ecc8544f057d2b49bc459d5268f9 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.OV4 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c5d1ecc8544f057d2b49bc459d5268f9 1 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c5d1ecc8544f057d2b49bc459d5268f9 1 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=c5d1ecc8544f057d2b49bc459d5268f9 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.OV4 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.OV4 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.OV4 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e50f18a69c77fcbf4896f1a3f4dfc7e49d4f8361412abfc8 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.13s 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e50f18a69c77fcbf4896f1a3f4dfc7e49d4f8361412abfc8 2 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e50f18a69c77fcbf4896f1a3f4dfc7e49d4f8361412abfc8 2 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e50f18a69c77fcbf4896f1a3f4dfc7e49d4f8361412abfc8 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.13s 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.13s 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.13s 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:25.495 21:50:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:25.495 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8cd70a3c8c8498fc6fe30c5f57144108 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.6i9 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8cd70a3c8c8498fc6fe30c5f57144108 0 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8cd70a3c8c8498fc6fe30c5f57144108 0 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8cd70a3c8c8498fc6fe30c5f57144108 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.6i9 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.6i9 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.6i9 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4a59a2dc0e2cd25e320452db640c798cdf34e72da7e5b3ab65e5c5b5bede2b9f 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5vE 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4a59a2dc0e2cd25e320452db640c798cdf34e72da7e5b3ab65e5c5b5bede2b9f 3 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4a59a2dc0e2cd25e320452db640c798cdf34e72da7e5b3ab65e5c5b5bede2b9f 3 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4a59a2dc0e2cd25e320452db640c798cdf34e72da7e5b3ab65e5c5b5bede2b9f 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5vE 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5vE 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.5vE 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3185763 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3185763 ']' 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.496 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.p5H 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Gw2 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gw2 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.6ZG 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zD8 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.zD8 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Kps 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.OV4 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OV4 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.13s 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.6i9 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.6i9 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.5vE 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.756 21:50:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:25.756 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:25.757 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:25.757 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:25.757 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:25.757 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:25.757 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:25.757 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:26.016 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:26.016 21:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:28.557 Waiting for block devices as requested 00:25:28.557 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:28.557 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:28.557 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:28.816 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:28.816 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:28.816 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:28.816 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:29.076 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:29.076 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:29.076 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:29.076 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:29.335 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:29.335 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:29.335 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:29.595 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:29.595 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:29.595 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:30.165 No valid GPT data, bailing 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:30.165 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:30.426 21:50:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:30.426 00:25:30.426 Discovery Log Number of Records 2, Generation counter 2 00:25:30.426 =====Discovery Log Entry 0====== 00:25:30.426 trtype: tcp 00:25:30.426 adrfam: ipv4 00:25:30.426 subtype: current discovery subsystem 00:25:30.426 treq: not specified, sq flow control disable supported 00:25:30.426 portid: 1 00:25:30.426 trsvcid: 4420 00:25:30.426 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:30.426 traddr: 10.0.0.1 00:25:30.426 eflags: none 00:25:30.426 sectype: none 00:25:30.426 =====Discovery Log Entry 1====== 00:25:30.426 trtype: tcp 00:25:30.426 adrfam: ipv4 00:25:30.426 subtype: nvme subsystem 00:25:30.426 treq: not specified, sq flow control disable supported 00:25:30.426 portid: 1 00:25:30.426 trsvcid: 4420 00:25:30.426 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:30.426 traddr: 10.0.0.1 00:25:30.426 eflags: none 00:25:30.426 sectype: none 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
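
The sequence above is the kernel-target bring-up: nvmet is loaded, setup.sh reset hands the local NVMe device back to the kernel driver, and the subsystem, namespace and TCP port for nqn.2024-02.io.spdk:cnode0 are created entirely through configfs before nvme discover confirms that both the discovery subsystem and cnode0 are reachable at 10.0.0.1:4420. The xtrace only records the echo half of each write, so the attribute file names in the sketch below are inferred from the usual nvmet configfs layout rather than copied from this log; treat it as an illustrative outline of what configure_kernel_target does, not the exact script.

    # Sketch of the configfs sequence (attribute paths assumed; the log hides the redirect targets)
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1
    modprobe nvmet
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"           # assumed target of the bare echo
    echo 1             > "$subsys/attr_allow_any_host"                    # assumed target of the 'echo 1'
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"               # back the namespace with the local disk
    echo 1             > "$subsys/namespaces/1/enable"
    echo 10.0.0.1      > "$port/addr_traddr"
    echo tcp           > "$port/addr_trtype"
    echo 4420          > "$port/addr_trsvcid"
    echo ipv4          > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                                   # expose the subsystem on the port
    nvme discover -t tcp -a 10.0.0.1 -s 4420                              # should list discovery + cnode0, as above
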
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.426 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.427 nvme0n1 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.427 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
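
From this point the log repeats one cycle per digest/DH-group/key combination. nvmet_auth_set_key() pushes a DH-HMAC-CHAP secret (and, where one is defined, a controller secret) plus the chosen hash and FFDHE group into the target's configfs entry for nqn.2024-02.io.spdk:host0; connect_authenticate() then drives the SPDK host side through rpc_cmd: bdev_nvme_set_options selects the digests and DH groups to negotiate, bdev_nvme_attach_controller connects with the matching --dhchap-key/--dhchap-ctrlr-key, bdev_nvme_get_controllers verifies that nvme0 came up, and bdev_nvme_detach_controller tears it down before the next combination. The RPC flags in the sketch below are the ones visible in the log; the dhchap_* attribute names on the target side are an assumption based on the kernel nvmet host entries, since the xtrace does not show where the echoes are redirected, and key1/ckey1 are the key names the test registered earlier (not shown in this excerpt).

    # One authentication cycle, sketched (host configfs attribute names are assumed)
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'           > "$host/dhchap_hash"       # assumed attribute
    echo ffdhe2048                > "$host/dhchap_dhgroup"    # assumed attribute
    echo "DHHC-1:00:NzM2...==:"   > "$host/dhchap_key"        # host secret for keyid 1 (abbreviated here)
    echo "DHHC-1:02:MWFh...==:"   > "$host/dhchap_ctrl_key"   # controller secret ckey1 (abbreviated here)

    # SPDK initiator side, as driven through rpc_cmd (a wrapper around scripts/rpc.py) in the log
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc.py bdev_nvme_get_controllers            # expect an entry named nvme0
    rpc.py bdev_nvme_detach_controller nvme0    # clean up before the next digest/dhgroup/key combination
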
00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.691 nvme0n1 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.691 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.692 21:50:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.692 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.008 nvme0n1 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.008 21:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.008 nvme0n1 00:25:31.008 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.008 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.008 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.008 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:31.008 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.008 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.268 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.268 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.268 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.268 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.268 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.268 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.268 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:31.268 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.268 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.268 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.269 nvme0n1 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.269 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.530 nvme0n1 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.530 21:50:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.530 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.531 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.531 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.531 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.531 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.531 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.791 nvme0n1 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.791 
21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.791 21:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.052 nvme0n1 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.052 21:50:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.052 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.313 nvme0n1 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.313 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.314 21:50:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.314 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.575 nvme0n1 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.575 21:50:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.575 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.836 nvme0n1 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.836 21:50:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.096 nvme0n1 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:33.096 21:50:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.096 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.097 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.097 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.097 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.097 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.097 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.097 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.097 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.097 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.097 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.097 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.357 nvme0n1 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
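The trace above keeps repeating the same cycle for each DH group and key index: host/auth.sh iterates over the configured dhgroups and key IDs, programs the key into the kernel nvmet target through nvmet_auth_set_key, and then calls connect_authenticate with the same digest/dhgroup/keyid. A minimal sketch of that driver loop, reconstructed from the host/auth.sh@101–@104 lines in this trace (the digest and dhgroup values are the ones appearing here; the keys array contents belong to the test script and are not reproduced):

    # Sketch of the per-dhgroup / per-keyid cycle seen in the trace above,
    # reconstructed from host/auth.sh@101-104. Array contents are assumptions.
    digest=sha256
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # groups exercised in this part of the run
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # program the target-side key (and optional ctrl key) for this keyid
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # then authenticate the host against it with the matching key
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
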
00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.357 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.618 nvme0n1 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.618 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.876 nvme0n1 00:25:33.876 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.876 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.876 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.876 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.876 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.876 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.135 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.135 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.135 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.135 21:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.135 21:50:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.135 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.394 nvme0n1 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.394 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.653 nvme0n1 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 
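Each connect_authenticate pass in the trace has the same shape: bdev_nvme_set_options restricts the host to a single digest and DH group, get_main_ns_ip resolves the initiator address (10.0.0.1 in this run), bdev_nvme_attach_controller connects with --dhchap-key keyN (plus --dhchap-ctrlr-key ckeyN when a controller key is set), and the resulting nvme0 controller is checked and detached again. A condensed view of one such pass, using only the RPCs that appear verbatim above (the surrounding xtrace and error handling are omitted):

    # Condensed view of one connect_authenticate pass (host/auth.sh@55-@65 in the trace);
    # address, subsystem and host NQNs are the ones printed above.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    ip=$(get_main_ns_ip)          # resolves to 10.0.0.1 in this run
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # connection came up
    rpc_cmd bdev_nvme_detach_controller nvme0                                # clean up for the next cycle
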
00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.653 21:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.222 nvme0n1 00:25:35.222 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.222 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.222 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.222 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.222 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.222 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.222 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.222 21:50:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.223 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.486 nvme0n1 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.486 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.746 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.006 nvme0n1 00:25:36.006 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.006 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.006 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.006 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.006 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.006 21:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.006 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.575 nvme0n1 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.575 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.576 21:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:37.146 nvme0n1 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.146 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.716 nvme0n1 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:37.716 
21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.716 21:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.285 nvme0n1 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.285 
21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.285 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.545 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.114 nvme0n1 00:25:39.114 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.114 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.114 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.114 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.114 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.114 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.114 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.114 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.114 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.114 21:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.114 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.683 nvme0n1 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.683 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.684 nvme0n1 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:39.684 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.944 nvme0n1 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.944 21:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:39.944 21:50:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.944 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.205 nvme0n1 00:25:40.205 21:50:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.205 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.206 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.206 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.467 nvme0n1 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.467 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.728 nvme0n1 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.728 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.729 nvme0n1 00:25:40.729 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.989 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.989 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.989 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.989 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.989 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.989 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.989 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.989 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.989 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.990 
21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.990 21:50:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.990 21:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.990 nvme0n1 00:25:40.990 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.990 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.990 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.990 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.990 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.990 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.251 nvme0n1 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.251 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.512 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.513 nvme0n1 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.513 
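Each pass resolves the initiator address through get_main_ns_ip (the nvmf/common.sh@741-755 lines repeated above) before attaching. A rough reconstruction of that helper from the expanded commands in this trace; the transport variable name is an assumption, and only the candidate mapping and the 10.0.0.1 result come from the log:

# Rough reconstruction of get_main_ns_ip from the nvmf/common.sh@741-755 trace lines (illustrative).
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # TEST_TRANSPORT is assumed to hold "tcp" here; the trace only shows the expanded value.
    [[ -z $TEST_TRANSPORT ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to use, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # dereference it; 10.0.0.1 in this run
    echo "${!ip}"
}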
21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.513 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.773 nvme0n1 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.773 
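The ffdhe3072 passes finish just below, and the trace then repeats the same five keyids for ffdhe4096 (and later ffdhe6144) via the outer group loop at host/auth.sh@101. An outline of that nesting inferred from the @101-@104 markers in this log; the loop shape and call names come from the trace, and the bodies correspond to the sketches above:

# Loop structure implied by the host/auth.sh@101-104 markers (reconstruction, not the real file).
for dhgroup in "${dhgroups[@]}"; do                # ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do                 # 0 through 4 in this run
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # install key/ckey on the target side
        connect_authenticate sha384 "$dhgroup" "$keyid"  # set host options, attach, verify, detach
    done
done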
21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.773 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.774 21:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.034 nvme0n1 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:42.034 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.035 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.295 21:50:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.295 nvme0n1 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.295 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.296 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.556 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.817 nvme0n1 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.817 21:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.077 nvme0n1 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.077 21:50:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.077 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.078 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.078 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.078 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.078 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.078 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.078 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.078 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.078 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.078 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.338 nvme0n1 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.338 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.339 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.339 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.339 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.339 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.339 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.339 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.339 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.339 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.339 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.339 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.339 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.909 nvme0n1 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.909 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.910 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.910 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.910 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.910 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.910 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.910 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.910 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.910 21:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.171 nvme0n1 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.171 21:50:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.171 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.172 21:50:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.172 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.782 nvme0n1 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:44.782 21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.782 
21:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.043 nvme0n1 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.043 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.610 nvme0n1 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.610 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.611 21:50:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.611 21:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.178 nvme0n1 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.178 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.179 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.179 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.179 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.179 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.179 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.179 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.179 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.179 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.179 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.748 nvme0n1 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.748 
21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.748 21:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.315 nvme0n1 00:25:47.315 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.315 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.315 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.315 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.315 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.315 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.574 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.575 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.575 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.575 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.575 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.575 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.575 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.575 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.575 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:47.575 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.575 21:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.143 nvme0n1 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.143 21:50:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.143 21:50:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.143 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.713 nvme0n1 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.713 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:48.973 nvme0n1 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.973 21:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.232 nvme0n1 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:49.232 
21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.232 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.233 nvme0n1 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.233 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.493 
21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.493 nvme0n1 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:49.493 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.494 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.753 nvme0n1 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:49.753 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.754 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.013 nvme0n1 00:25:50.013 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.013 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.013 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.013 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.013 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.013 21:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.013 
21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.013 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.013 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.013 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.013 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.013 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.013 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:50.013 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.013 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.013 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:50.013 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.014 21:50:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.014 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.273 nvme0n1 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.273 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:50.274 21:50:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.274 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.533 nvme0n1 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.534 21:50:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.534 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.794 nvme0n1 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:50.794 
21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:50.794 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.795 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
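The trace above keeps repeating the same authentication cycle for each DH group and key index, with the digest fixed at sha512 in this part of the run: host/auth.sh first programs the target side via nvmet_auth_set_key, then reconfigures the host with bdev_nvme_set_options, attaches with the matching --dhchap-key/--dhchap-ctrlr-key pair, checks that bdev_nvme_get_controllers reports nvme0, and detaches again. A condensed sketch of that loop, reconstructed only from the commands visible in this log (the keys/ckeys arrays and the nvmet-side redirection targets are not shown in this excerpt), looks roughly like:

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
  for keyid in "${!keys[@]}"; do
    # target side: write 'hmac(sha512)', the dhgroup, and the DHHC-1 key (plus ckey when present)
    nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
    # host side: restrict digests/dhgroups, then connect with the matching key pair
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    # verify the authenticated connect, then tear it down for the next iteration
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done

Key index 4 has no controller key in this run, so the ${ckeys[keyid]:+...} expansion drops --dhchap-ctrlr-key for that iteration, exactly as seen in the attach_controller calls above.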
00:25:51.055 nvme0n1 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.055 21:50:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.055 21:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.314 nvme0n1 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.314 21:50:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.314 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.315 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.315 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.315 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.315 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.315 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.315 21:50:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.315 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.315 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.315 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.315 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.315 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.574 nvme0n1 00:25:51.574 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.574 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.574 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.574 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.575 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.836 nvme0n1 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.836 21:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.097 nvme0n1 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.097 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.358 nvme0n1 00:25:52.358 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.358 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.358 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.358 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.358 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.358 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
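The get_main_ns_ip fragments repeated throughout this trace pick the address the initiator dials: for the tcp transport the helper resolves NVMF_INITIATOR_IP (10.0.0.1 in this run), while the rdma candidate would be NVMF_FIRST_TARGET_IP. A minimal reconstruction of that selection follows; the transport variable name and the exported addresses are assumptions taken from the test environment, not shown verbatim in this excerpt.

# Reconstruction of the candidate-selection logic traced at nvmf/common.sh@741-755.
# TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP are assumed to be
# exported by the test environment (tcp and 10.0.0.1 in this job).
get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	local ip_var=${ip_candidates[$TEST_TRANSPORT]}
	ip=${!ip_var}   # indirect expansion -> 10.0.0.1 for tcp in this run
	[[ -z $ip ]] && return 1
	echo "$ip"
}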
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.618 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.618 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.618 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.618 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.618 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.618 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.618 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.618 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.619 21:51:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.619 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.879 nvme0n1 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.880 21:51:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.880 21:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.450 nvme0n1 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:53.450 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
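Each connect_authenticate pass above follows the same RPC sequence against the local SPDK target: restrict bdev_nvme to a single digest/dhgroup pair, attach with the keyring names keyN/ckeyN (registered earlier in the test, not shown in this excerpt), confirm the controller came up, and detach. A condensed sketch of that sequence, assuming rpc_cmd is the usual test-harness wrapper around scripts/rpc.py:

# Condensed form of connect_authenticate (host/auth.sh@55-65) as it runs here,
# e.g. digest=sha512, dhgroup=ffdhe6144, keyid=1.
connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# controller key is optional: only pass it when a ckeyN was registered
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"
	# the authenticated attach must have produced a controller named nvme0
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}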
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.451 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.712 nvme0n1 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.712 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
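On the target side, nvmet_auth_set_key programs the kernel nvmet host entry with the matching credentials before every connect. The trace only shows the echoed values (the hmac name, the dhgroup, and the DHHC-1 secrets); the configfs destinations in the sketch below are assumptions based on the kernel's nvmet auth attributes, and the host directory path is hypothetical.

# Sketch of nvmet_auth_set_key (host/auth.sh@42-51). The echoes in the trace are
# the values being set; writing them to per-host configfs attributes named
# dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key is an assumption.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}
	local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

	# secrets use the DHHC-1:<id>:<base64>: representation seen throughout the trace
	echo "hmac($digest)" > "$host_dir/dhchap_hash"
	echo "$dhgroup" > "$host_dir/dhchap_dhgroup"
	echo "$key" > "$host_dir/dhchap_key"
	[[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
}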
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.972 21:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.232 nvme0n1 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.232 21:51:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.232 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.802 nvme0n1 00:25:54.802 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.802 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.802 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.802 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.802 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.802 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.802 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.802 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFmNzU1ZjQzODMyMWM2MzA4OWJkNDg1YTQ4MjM4Y2K5h/NK: 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: ]] 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA5NWE4NWEwYmQyNDc2MzIzY2NiYzZhYzg1OTdiYmZkNmRkYWQwYzRlZTFmNDViNmI3N2NkZDM1M2I0YjRhOfG1PEI=: 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
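The outer structure driving this whole stretch is visible in the host/auth.sh@101-104 markers: for every dhgroup (ffdhe4096, ffdhe6144 and ffdhe8192 in this excerpt) and every key id 0-4, the target key is reprogrammed and a full authenticated connect/detach cycle runs. Roughly, and assuming an enclosing digest loop that sits outside this excerpt:

# Shape of the main loop (host/auth.sh@101-104) for the sha512 portion of the log.
# keys/ckeys are the DHHC-1 secrets registered earlier in the test; the digest
# loop wrapping this is inferred from the line numbering, not shown here.
for dhgroup in "${dhgroups[@]}"; do      # ffdhe4096, ffdhe6144, ffdhe8192 here
	for keyid in "${!keys[@]}"; do       # 0 1 2 3 4
		nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
		connect_authenticate "$digest" "$dhgroup" "$keyid"
	done
done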
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.803 21:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.385 nvme0n1 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:55.385 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.386 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.976 nvme0n1 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.976 21:51:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGFlZGRkN2I3NTY1ZDYzMjc5ZmFiZjNjNjM3NmU0NmH2nV2j: 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: ]] 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzVkMWVjYzg1NDRmMDU3ZDJiNDliYzQ1OWQ1MjY4ZjnAk/NK: 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.976 21:51:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.976 21:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.547 nvme0n1 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUwZjE4YTY5Yzc3ZmNiZjQ4OTZmMWEzZjRkZmM3ZTQ5ZDRmODM2MTQxMmFiZmM4wYUBxw==: 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: ]] 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNkNzBhM2M4Yzg0OThmYzZmZTMwYzVmNTcxNDQxMDhRDt7B: 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:56.547 21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.547 
21:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.117 nvme0n1 00:25:57.117 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.117 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.117 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.117 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.117 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.117 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.117 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.117 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGE1OWEyZGMwZTJjZDI1ZTMyMDQ1MmRiNjQwYzc5OGNkZjM0ZTcyZGE3ZTViM2FiNjVlNWM1YjViZWRlMmI5Zkqz/Os=: 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.377 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.948 nvme0n1 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2YTAwNzcyZDI4YjU4NWI0YTI1OTA3ZTI1OTYzOGM5YmIzMDAzNTYzYmNmOTczbqmysg==: 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWFhYjliZmJjZjYwMTY2MTYwMzZlOWVmZmI5ZjI1MmE0ZjY3MTJjZWVmN2RkMTAz9cQUVA==: 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.948 request: 00:25:57.948 { 00:25:57.948 "name": "nvme0", 00:25:57.948 "trtype": "tcp", 00:25:57.948 "traddr": "10.0.0.1", 00:25:57.948 "adrfam": "ipv4", 00:25:57.948 "trsvcid": "4420", 00:25:57.948 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:57.948 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:57.948 "prchk_reftag": false, 00:25:57.948 "prchk_guard": false, 00:25:57.948 "hdgst": false, 00:25:57.948 "ddgst": false, 00:25:57.948 "method": "bdev_nvme_attach_controller", 00:25:57.948 "req_id": 1 00:25:57.948 } 00:25:57.948 Got JSON-RPC error response 00:25:57.948 response: 00:25:57.948 { 00:25:57.948 "code": -5, 00:25:57.948 "message": "Input/output error" 00:25:57.948 } 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.948 21:51:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:57.948 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:57.949 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:57.949 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:57.949 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:57.949 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:57.949 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.949 21:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.949 request: 00:25:57.949 { 00:25:57.949 "name": "nvme0", 00:25:57.949 "trtype": "tcp", 00:25:57.949 "traddr": "10.0.0.1", 00:25:57.949 "adrfam": "ipv4", 00:25:57.949 "trsvcid": "4420", 00:25:57.949 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:57.949 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:57.949 "prchk_reftag": false, 00:25:57.949 "prchk_guard": false, 00:25:57.949 "hdgst": false, 00:25:57.949 "ddgst": false, 00:25:57.949 "dhchap_key": "key2", 00:25:57.949 "method": "bdev_nvme_attach_controller", 00:25:57.949 "req_id": 1 00:25:57.949 } 00:25:57.949 Got JSON-RPC error response 00:25:57.949 response: 00:25:57.949 { 00:25:57.949 "code": -5, 00:25:57.949 "message": "Input/output error" 00:25:57.949 } 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.949 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.209 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.209 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.209 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.209 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.209 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.210 request: 00:25:58.210 { 00:25:58.210 "name": "nvme0", 00:25:58.210 "trtype": "tcp", 00:25:58.210 "traddr": "10.0.0.1", 00:25:58.210 "adrfam": "ipv4", 00:25:58.210 "trsvcid": "4420", 00:25:58.210 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:58.210 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:58.210 "prchk_reftag": false, 00:25:58.210 "prchk_guard": false, 00:25:58.210 "hdgst": false, 00:25:58.210 "ddgst": false, 00:25:58.210 "dhchap_key": "key1", 00:25:58.210 "dhchap_ctrlr_key": "ckey2", 00:25:58.210 "method": "bdev_nvme_attach_controller", 00:25:58.210 "req_id": 1 00:25:58.210 } 00:25:58.210 Got JSON-RPC error response 00:25:58.210 response: 00:25:58.210 { 00:25:58.210 "code": -5, 00:25:58.210 "message": "Input/output error" 00:25:58.210 } 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.210 rmmod nvme_tcp 00:25:58.210 rmmod nvme_fabrics 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3185763 ']' 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3185763 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3185763 ']' 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3185763 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3185763 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3185763' 00:25:58.210 killing process with pid 3185763 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3185763 00:25:58.210 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3185763 00:25:58.470 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:58.470 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:58.470 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:58.470 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:58.470 21:51:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:58.470 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.470 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.470 21:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:00.433 21:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:02.976 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:02.976 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:03.917 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:03.917 21:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.p5H /tmp/spdk.key-null.6ZG /tmp/spdk.key-sha256.Kps /tmp/spdk.key-sha384.13s /tmp/spdk.key-sha512.5vE /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:03.917 21:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:06.457 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:06.457 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:06.457 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:06.457 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:06.457 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:06.457 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:06.457 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:06.457 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:06.716 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:06.716 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:06.716 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:06.716 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:06.716 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:06.716 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:06.716 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:06.716 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:06.716 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:06.716 00:26:06.716 real 0m48.224s 00:26:06.716 user 0m43.305s 00:26:06.716 sys 0m11.737s 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.716 ************************************ 00:26:06.716 END TEST nvmf_auth_host 00:26:06.716 ************************************ 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.716 ************************************ 00:26:06.716 START TEST nvmf_digest 00:26:06.716 ************************************ 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:06.716 * Looking for test storage... 
00:26:06.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.716 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.976 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:06.977 
21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:06.977 21:51:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:12.258 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.258 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:12.259 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.259 
21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:12.259 Found net devices under 0000:86:00.0: cvl_0_0 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:12.259 Found net devices under 0000:86:00.1: cvl_0_1 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.259 21:51:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:12.259 21:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:12.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:26:12.259 00:26:12.259 --- 10.0.0.2 ping statistics --- 00:26:12.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.259 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:12.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:26:12.259 00:26:12.259 --- 10.0.0.1 ping statistics --- 00:26:12.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.259 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:12.259 ************************************ 00:26:12.259 START TEST nvmf_digest_clean 00:26:12.259 ************************************ 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3199230 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3199230 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3199230 ']' 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:12.259 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:12.260 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:12.260 [2024-07-24 21:51:20.195389] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:26:12.260 [2024-07-24 21:51:20.195432] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.260 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.260 [2024-07-24 21:51:20.252981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.260 [2024-07-24 21:51:20.333272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.260 [2024-07-24 21:51:20.333308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.260 [2024-07-24 21:51:20.333315] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.260 [2024-07-24 21:51:20.333324] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.260 [2024-07-24 21:51:20.333329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:12.260 [2024-07-24 21:51:20.333345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.199 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:13.199 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:13.199 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.199 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:13.199 21:51:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.199 null0 00:26:13.199 [2024-07-24 21:51:21.108365] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.199 [2024-07-24 21:51:21.132533] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3199477 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3199477 /var/tmp/bperf.sock 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3199477 ']' 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:13.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.199 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:13.199 [2024-07-24 21:51:21.181916] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:26:13.199 [2024-07-24 21:51:21.181959] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199477 ] 00:26:13.199 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.199 [2024-07-24 21:51:21.234983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.199 [2024-07-24 21:51:21.314428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.141 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.141 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:14.141 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:14.141 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:14.141 21:51:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:14.141 21:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.141 21:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.712 nvme0n1 00:26:14.712 21:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:14.712 21:51:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:14.712 Running I/O for 2 seconds... 
00:26:16.619 00:26:16.619 Latency(us) 00:26:16.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.619 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:16.619 nvme0n1 : 2.00 26424.10 103.22 0.00 0.00 4838.39 2236.77 17438.27 00:26:16.619 =================================================================================================================== 00:26:16.619 Total : 26424.10 103.22 0.00 0.00 4838.39 2236.77 17438.27 00:26:16.619 0 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:16.880 | select(.opcode=="crc32c") 00:26:16.880 | "\(.module_name) \(.executed)"' 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3199477 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3199477 ']' 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3199477 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3199477 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3199477' 00:26:16.880 killing process with pid 3199477 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3199477 00:26:16.880 Received shutdown signal, test time was about 2.000000 seconds 00:26:16.880 00:26:16.880 Latency(us) 00:26:16.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.880 =================================================================================================================== 00:26:16.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:16.880 21:51:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3199477 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3200171 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3200171 /var/tmp/bperf.sock 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3200171 ']' 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:17.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:17.140 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:17.140 [2024-07-24 21:51:25.170483] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:26:17.140 [2024-07-24 21:51:25.170531] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200171 ] 00:26:17.140 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:17.140 Zero copy mechanism will not be used. 
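After every run the script proves that the digests were really computed by reading back the accelerator statistics, which is the accel_get_stats + jq step visible after the result table above. These passes run with scan_dsa=false, so the expected module is the software crc32c implementation and its executed counter must be greater than zero. A sketch of that check, reusing the jq filter exactly as logged:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  read -r acc_module acc_executed < <(
      "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )

  # No DSA was requested, so the digests must have come from the software module
  [ "$acc_executed" -gt 0 ] && [ "$acc_module" = software ]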
00:26:17.140 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.140 [2024-07-24 21:51:25.222932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.400 [2024-07-24 21:51:25.302100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.970 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:17.970 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:17.970 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:17.970 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:17.970 21:51:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:18.230 21:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.230 21:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.491 nvme0n1 00:26:18.491 21:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:18.491 21:51:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:18.491 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.491 Zero copy mechanism will not be used. 00:26:18.491 Running I/O for 2 seconds... 
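A quick sanity check on the result tables: the MiB/s column is simply IOPS times the pass's block size. For the 4 KiB table above, 26424.10 IOPS x 4096 bytes comes to 103.22 MiB/s, and the 128 KiB table that follows works out the same way (2254.48 IOPS x 131072 bytes = 281.81 MiB/s). As a one-liner:

  awk 'BEGIN { printf "%.2f MiB/s\n", 26424.10 * 4096 / (1024 * 1024) }'    # prints 103.22 MiB/s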
00:26:20.469 00:26:20.469 Latency(us) 00:26:20.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.469 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:20.469 nvme0n1 : 2.00 2254.48 281.81 0.00 0.00 7094.14 6382.64 24504.77 00:26:20.469 =================================================================================================================== 00:26:20.469 Total : 2254.48 281.81 0.00 0.00 7094.14 6382.64 24504.77 00:26:20.469 0 00:26:20.469 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:20.469 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:20.469 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:20.469 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:20.469 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:20.469 | select(.opcode=="crc32c") 00:26:20.469 | "\(.module_name) \(.executed)"' 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3200171 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3200171 ']' 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3200171 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3200171 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3200171' 00:26:20.729 killing process with pid 3200171 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3200171 00:26:20.729 Received shutdown signal, test time was about 2.000000 seconds 00:26:20.729 00:26:20.729 Latency(us) 00:26:20.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.729 =================================================================================================================== 00:26:20.729 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.729 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3200171 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3200669 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3200669 /var/tmp/bperf.sock 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3200669 ']' 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:20.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:20.990 21:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:20.990 [2024-07-24 21:51:29.021857] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:26:20.990 [2024-07-24 21:51:29.021908] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200669 ] 00:26:20.990 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.990 [2024-07-24 21:51:29.077568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.250 [2024-07-24 21:51:29.151775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.820 21:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:21.820 21:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:21.820 21:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:21.820 21:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:21.820 21:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:22.081 21:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.081 21:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.340 nvme0n1 00:26:22.340 21:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:22.340 21:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:22.340 Running I/O for 2 seconds... 
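The randwrite passes being traced here repeat the two randread passes above with the same shapes, so the whole nvmf_digest_clean sweep is four bdevperf runs: randread and randwrite, each at 4096 bytes / queue depth 128 and at 131072 bytes / queue depth 16, always for 2 seconds with digests enabled. Condensed into a loop (the harness runs them one at a time through run_bperf; the loop and variable names are illustrative only):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  for rw in randread randwrite; do
      for shape in "4096 128" "131072 16"; do
          set -- $shape    # $1 = I/O size, $2 = queue depth
          "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
              -w "$rw" -o "$1" -t 2 -q "$2" -z --wait-for-rpc &
          bperfpid=$!
          # ... wait for the socket, framework_start_init, attach with --ddgst,
          #     perform_tests and read the accel stats, as in the sketches above ...
          kill "$bperfpid" && wait "$bperfpid"
      done
  done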
00:26:24.879 00:26:24.879 Latency(us) 00:26:24.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.879 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:24.879 nvme0n1 : 2.00 26947.51 105.26 0.00 0.00 4744.29 2507.46 30089.57 00:26:24.879 =================================================================================================================== 00:26:24.879 Total : 26947.51 105.26 0.00 0.00 4744.29 2507.46 30089.57 00:26:24.879 0 00:26:24.879 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:24.879 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:24.879 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:24.879 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:24.879 | select(.opcode=="crc32c") 00:26:24.879 | "\(.module_name) \(.executed)"' 00:26:24.879 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:24.879 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:24.879 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:24.879 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:24.879 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:24.879 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3200669 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3200669 ']' 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3200669 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3200669 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3200669' 00:26:24.880 killing process with pid 3200669 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3200669 00:26:24.880 Received shutdown signal, test time was about 2.000000 seconds 00:26:24.880 00:26:24.880 Latency(us) 00:26:24.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.880 =================================================================================================================== 00:26:24.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3200669 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3201343 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3201343 /var/tmp/bperf.sock 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3201343 ']' 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:24.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:24.880 21:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:24.880 [2024-07-24 21:51:32.848849] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:26:24.880 [2024-07-24 21:51:32.848899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201343 ] 00:26:24.880 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:24.880 Zero copy mechanism will not be used. 
00:26:24.880 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.880 [2024-07-24 21:51:32.902610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.880 [2024-07-24 21:51:32.971667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.817 21:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:25.817 21:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:25.818 21:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:25.818 21:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:25.818 21:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:25.818 21:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.818 21:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.077 nvme0n1 00:26:26.077 21:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:26.077 21:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:26.337 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:26.337 Zero copy mechanism will not be used. 00:26:26.337 Running I/O for 2 seconds... 
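Each bperf instance, and finally the long-lived NVMe-oF target itself, is stopped through the killprocess helper whose trace repeats after every pass (and again below): confirm the PID is still alive, check which command it belongs to, then signal it and reap it. A stripped-down equivalent of what the trace shows; the real helper also guards against ever signalling a sudo process:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1         # still running?
      ps --no-headers -o comm= "$pid"    # e.g. reactor_1 for a bperf instance
      kill "$pid"
      wait "$pid" 2>/dev/null            # reap it (works when it is our own child)
  }

  killprocess "$bperfpid"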
00:26:28.244 00:26:28.244 Latency(us) 00:26:28.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.244 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:28.244 nvme0n1 : 2.01 1496.65 187.08 0.00 0.00 10665.31 8092.27 38067.87 00:26:28.244 =================================================================================================================== 00:26:28.244 Total : 1496.65 187.08 0.00 0.00 10665.31 8092.27 38067.87 00:26:28.244 0 00:26:28.244 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:28.244 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:28.244 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:28.244 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:28.244 | select(.opcode=="crc32c") 00:26:28.244 | "\(.module_name) \(.executed)"' 00:26:28.244 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3201343 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3201343 ']' 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3201343 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3201343 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3201343' 00:26:28.504 killing process with pid 3201343 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3201343 00:26:28.504 Received shutdown signal, test time was about 2.000000 seconds 00:26:28.504 00:26:28.504 Latency(us) 00:26:28.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.504 =================================================================================================================== 00:26:28.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:28.504 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3201343 00:26:28.764 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3199230 00:26:28.764 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3199230 ']' 00:26:28.764 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3199230 00:26:28.764 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:28.764 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:28.764 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3199230 00:26:28.764 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:28.764 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:28.764 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3199230' 00:26:28.764 killing process with pid 3199230 00:26:28.764 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3199230 00:26:28.764 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3199230 00:26:29.024 00:26:29.024 real 0m16.774s 00:26:29.024 user 0m33.319s 00:26:29.024 sys 0m3.304s 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:29.024 ************************************ 00:26:29.024 END TEST nvmf_digest_clean 00:26:29.024 ************************************ 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:29.024 ************************************ 00:26:29.024 START TEST nvmf_digest_error 00:26:29.024 ************************************ 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3202073 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3202073 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3202073 ']' 
00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.024 21:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:29.024 [2024-07-24 21:51:37.036057] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:26:29.024 [2024-07-24 21:51:37.036099] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.024 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.024 [2024-07-24 21:51:37.092060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.283 [2024-07-24 21:51:37.171997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.283 [2024-07-24 21:51:37.172029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.283 [2024-07-24 21:51:37.172036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.283 [2024-07-24 21:51:37.172047] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.283 [2024-07-24 21:51:37.172053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
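Where nvmf_digest_clean only verifies that digests are computed, the nvmf_digest_error test starting here makes them fail on purpose. The long-lived target application (the /var/tmp/spdk.sock instance launched above with --wait-for-rpc) has the crc32c opcode reassigned to the accel 'error' module, and that module is later told to corrupt a batch of results, so the data digests carried in the NVMe/TCP PDUs stop matching and the initiator logs the 'data digest error' and 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' completions that fill the rest of this trace. The initiator is also configured with --nvme-error-stat and an unlimited bdev retry count, so those completions are retried instead of failing the run. Condensed from the RPCs visible below; the $RPC and $BPERF shorthands are illustrative:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"                              # target app, /var/tmp/spdk.sock
  BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"     # bdevperf instance

  # Before framework_start_init: route every crc32c operation to the error module
  $RPC accel_assign_opc -o crc32c -m error
  $RPC framework_start_init

  # Initiator side: keep NVMe error statistics and retry failed I/O forever
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Injection starts disabled; it is armed for 256 operations only after the
  # controller is attached, right before perform_tests
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256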
00:26:29.283 [2024-07-24 21:51:37.172070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.852 [2024-07-24 21:51:37.858055] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.852 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.852 null0 00:26:29.852 [2024-07-24 21:51:37.947962] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.112 [2024-07-24 21:51:37.972129] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3202318 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3202318 /var/tmp/bperf.sock 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3202318 ']' 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:30.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:30.112 21:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:30.112 [2024-07-24 21:51:38.006894] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:26:30.112 [2024-07-24 21:51:38.006933] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3202318 ] 00:26:30.112 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.112 [2024-07-24 21:51:38.060576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.112 [2024-07-24 21:51:38.138087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.051 21:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:31.051 21:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:31.051 21:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:31.051 21:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:31.051 21:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:31.051 21:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.051 21:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.051 21:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.051 21:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.051 21:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.310 nvme0n1 00:26:31.310 21:51:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:31.310 21:51:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.310 21:51:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.310 21:51:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.310 21:51:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:31.310 21:51:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:31.570 Running I/O for 2 seconds... 00:26:31.570 [2024-07-24 21:51:39.509700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.570 [2024-07-24 21:51:39.509734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.570 [2024-07-24 21:51:39.509745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.570 [2024-07-24 21:51:39.522179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.570 [2024-07-24 21:51:39.522204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.570 [2024-07-24 21:51:39.522213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.570 [2024-07-24 21:51:39.532033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.570 [2024-07-24 21:51:39.532059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.532069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.541503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.541524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.541532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.552121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.552141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.552150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.561733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.561753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.561761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.575822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.575843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.575852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.585862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.585883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.585898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.594497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.594517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.594526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.604047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.604068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.604077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.613995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.614016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.614024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.624254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.624275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.624283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.637009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.637030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 
[2024-07-24 21:51:39.637038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.649537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.649558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.649566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.658800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.658821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.658829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.670117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.670138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.670146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.571 [2024-07-24 21:51:39.680162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.571 [2024-07-24 21:51:39.680185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.571 [2024-07-24 21:51:39.680193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.831 [2024-07-24 21:51:39.691370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.831 [2024-07-24 21:51:39.691391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.831 [2024-07-24 21:51:39.691400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.831 [2024-07-24 21:51:39.703564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.831 [2024-07-24 21:51:39.703585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.831 [2024-07-24 21:51:39.703593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.831 [2024-07-24 21:51:39.712322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.831 [2024-07-24 21:51:39.712342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11472 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.831 [2024-07-24 21:51:39.712351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.831 [2024-07-24 21:51:39.725283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.831 [2024-07-24 21:51:39.725303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.831 [2024-07-24 21:51:39.725311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.831 [2024-07-24 21:51:39.733673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.831 [2024-07-24 21:51:39.733693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.831 [2024-07-24 21:51:39.733701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.831 [2024-07-24 21:51:39.746390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.831 [2024-07-24 21:51:39.746410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.831 [2024-07-24 21:51:39.746418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.831 [2024-07-24 21:51:39.754940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.831 [2024-07-24 21:51:39.754960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.831 [2024-07-24 21:51:39.754968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.831 [2024-07-24 21:51:39.764422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.831 [2024-07-24 21:51:39.764443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.831 [2024-07-24 21:51:39.764451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.831 [2024-07-24 21:51:39.774748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.831 [2024-07-24 21:51:39.774768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.831 [2024-07-24 21:51:39.774776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.831 [2024-07-24 21:51:39.786325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.831 [2024-07-24 21:51:39.786345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:98 nsid:1 lba:18168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.831 [2024-07-24 21:51:39.786353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.831 [2024-07-24 21:51:39.797594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.831 [2024-07-24 21:51:39.797615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.797624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.806407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.806427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.806435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.815570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.815590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.815598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.825580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.825600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.825608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.834647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.834668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.834676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.847644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.847664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.847673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.859507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.859527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.859538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.868193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.868213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.868222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.878714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.878734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.878742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.887801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.887822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.887830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.898994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.899014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.899022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.910332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.910354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.910362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.920417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.920438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.920447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.930483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 
[2024-07-24 21:51:39.930504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.930512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.832 [2024-07-24 21:51:39.938779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:31.832 [2024-07-24 21:51:39.938799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.832 [2024-07-24 21:51:39.938808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:39.949583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:39.949606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:39.949614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:39.958872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:39.958894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:39.958905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:39.968059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:39.968080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:39.968088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:39.977627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:39.977648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:39.977656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:39.986740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:39.986761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:39.986769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:39.995791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:39.995812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:39.995820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:40.006572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:40.006744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:40.006784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:40.016650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:40.016674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:40.016683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:40.027027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:40.027055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:40.027069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:40.037422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:40.037443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:40.037451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:40.046441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:40.046463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:40.046471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:40.057399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:40.057423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:40.057432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:40.065869] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:40.065891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:40.065899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:40.075818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:40.075839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:40.075848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:40.084871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:40.084892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:40.084901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:40.095288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:40.095309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.093 [2024-07-24 21:51:40.095317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.093 [2024-07-24 21:51:40.104859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.093 [2024-07-24 21:51:40.104880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.094 [2024-07-24 21:51:40.104888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.094 [2024-07-24 21:51:40.113950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.094 [2024-07-24 21:51:40.113974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.094 [2024-07-24 21:51:40.113983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.094 [2024-07-24 21:51:40.123865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.094 [2024-07-24 21:51:40.123885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.094 [2024-07-24 21:51:40.123894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:32.094 [2024-07-24 21:51:40.134349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.094 [2024-07-24 21:51:40.134370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.094 [2024-07-24 21:51:40.134379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.094 [2024-07-24 21:51:40.142713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.094 [2024-07-24 21:51:40.142733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.094 [2024-07-24 21:51:40.142741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.094 [2024-07-24 21:51:40.153959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.094 [2024-07-24 21:51:40.153980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.094 [2024-07-24 21:51:40.153989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.094 [2024-07-24 21:51:40.163206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.094 [2024-07-24 21:51:40.163226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.094 [2024-07-24 21:51:40.163234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.094 [2024-07-24 21:51:40.173011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.094 [2024-07-24 21:51:40.173032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.094 [2024-07-24 21:51:40.173040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.094 [2024-07-24 21:51:40.181815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.094 [2024-07-24 21:51:40.181835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.094 [2024-07-24 21:51:40.181843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.094 [2024-07-24 21:51:40.192264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.094 [2024-07-24 21:51:40.192286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.094 [2024-07-24 21:51:40.192294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.094 [2024-07-24 21:51:40.201607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.094 [2024-07-24 21:51:40.201628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.094 [2024-07-24 21:51:40.201636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.353 [2024-07-24 21:51:40.211418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.353 [2024-07-24 21:51:40.211439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.353 [2024-07-24 21:51:40.211448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.353 [2024-07-24 21:51:40.221441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.353 [2024-07-24 21:51:40.221462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.353 [2024-07-24 21:51:40.221470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.353 [2024-07-24 21:51:40.230603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.353 [2024-07-24 21:51:40.230623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.353 [2024-07-24 21:51:40.230631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.353 [2024-07-24 21:51:40.239875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.353 [2024-07-24 21:51:40.239895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.353 [2024-07-24 21:51:40.239904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.353 [2024-07-24 21:51:40.249650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.353 [2024-07-24 21:51:40.249671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.353 [2024-07-24 21:51:40.249680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.353 [2024-07-24 21:51:40.258893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.353 [2024-07-24 21:51:40.258914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.353 [2024-07-24 21:51:40.258923] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.353 [2024-07-24 21:51:40.269388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.353 [2024-07-24 21:51:40.269409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.353 [2024-07-24 21:51:40.269417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.353 [2024-07-24 21:51:40.279092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.279112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.279124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.287503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.287523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.287532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.297858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.297879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.297887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.307851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.307871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.307880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.317313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.317333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.317341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.327253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.327275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.327283] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.335882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.335904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.335913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.346688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.346709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.346718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.355953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.355974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.355983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.366278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.366304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.366312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.374676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.374697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.374705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.384895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.384916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.384924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.394626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.394648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:32.354 [2024-07-24 21:51:40.394656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.404036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.404064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.404072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.413026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.413052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.413061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.423886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.423907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.423915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.433657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.433678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.433686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.442939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.442960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.442969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.452101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.452123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.452131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.354 [2024-07-24 21:51:40.463214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.354 [2024-07-24 21:51:40.463236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12986 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.354 [2024-07-24 21:51:40.463245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.614 [2024-07-24 21:51:40.473209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.614 [2024-07-24 21:51:40.473232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.614 [2024-07-24 21:51:40.473241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.614 [2024-07-24 21:51:40.482535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.614 [2024-07-24 21:51:40.482556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.614 [2024-07-24 21:51:40.482565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.614 [2024-07-24 21:51:40.491599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.614 [2024-07-24 21:51:40.491620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.614 [2024-07-24 21:51:40.491628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.614 [2024-07-24 21:51:40.501054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.614 [2024-07-24 21:51:40.501075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.614 [2024-07-24 21:51:40.501083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.614 [2024-07-24 21:51:40.510178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.614 [2024-07-24 21:51:40.510199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.614 [2024-07-24 21:51:40.510207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.614 [2024-07-24 21:51:40.519735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.614 [2024-07-24 21:51:40.519756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.614 [2024-07-24 21:51:40.519764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.614 [2024-07-24 21:51:40.529097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.614 [2024-07-24 21:51:40.529119] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.614 [2024-07-24 21:51:40.529132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.614 [2024-07-24 21:51:40.538660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.614 [2024-07-24 21:51:40.538681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.614 [2024-07-24 21:51:40.538689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.614 [2024-07-24 21:51:40.548105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.614 [2024-07-24 21:51:40.548126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.614 [2024-07-24 21:51:40.548134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.614 [2024-07-24 21:51:40.557702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.614 [2024-07-24 21:51:40.557722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.614 [2024-07-24 21:51:40.557730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.614 [2024-07-24 21:51:40.567227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.614 [2024-07-24 21:51:40.567246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.614 [2024-07-24 21:51:40.567254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.575955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.575975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.575983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.585953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.585974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.585981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.595177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.595198] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.595206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.603776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.603797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.603805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.613444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.613465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.613473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.623637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.623658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.623665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.633132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.633153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.633161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.641387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.641407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.641415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.651934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.651954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.651963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.660458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 
00:26:32.615 [2024-07-24 21:51:40.660478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.660487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.670732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.670753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.670761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.679901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.679922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.679930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.689176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.689197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.689209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.698692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.698713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.698721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.708400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.708420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.708428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.717508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.717529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.717537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.615 [2024-07-24 21:51:40.726190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xc754f0) 00:26:32.615 [2024-07-24 21:51:40.726211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.615 [2024-07-24 21:51:40.726219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.737136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.737156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.737165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.746661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.746682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.746690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.754992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.755013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.755021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.764901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.764921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.764929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.773823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.773846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.773855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.784058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.784079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.784087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.792279] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.792300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.792308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.802281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.802301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.802309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.812015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.812035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.812048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.821125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.821145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.821153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.829873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.829893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.829902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.840344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.840365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.840373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.848584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.848604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.848612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:32.875 [2024-07-24 21:51:40.859663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.859684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.859692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.868134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.868155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.868163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.877475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.877496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.877504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.887686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.887706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.887714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.896557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.896577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.896585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.905971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.905992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.906000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.915227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.915247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.915256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.925012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.925032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.925040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.933291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.933312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.933324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.943232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.943253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.943260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.952931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.952951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-07-24 21:51:40.952959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.875 [2024-07-24 21:51:40.962195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.875 [2024-07-24 21:51:40.962216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.876 [2024-07-24 21:51:40.962224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.876 [2024-07-24 21:51:40.971407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.876 [2024-07-24 21:51:40.971427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.876 [2024-07-24 21:51:40.971435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.876 [2024-07-24 21:51:40.981089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.876 [2024-07-24 21:51:40.981111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.876 [2024-07-24 21:51:40.981119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.876 [2024-07-24 21:51:40.989553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:32.876 [2024-07-24 21:51:40.989574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.876 [2024-07-24 21:51:40.989582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.135 [2024-07-24 21:51:41.000094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.135 [2024-07-24 21:51:41.000116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.135 [2024-07-24 21:51:41.000124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.135 [2024-07-24 21:51:41.008241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.135 [2024-07-24 21:51:41.008262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.135 [2024-07-24 21:51:41.008270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.135 [2024-07-24 21:51:41.018565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.135 [2024-07-24 21:51:41.018590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.135 [2024-07-24 21:51:41.018598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.135 [2024-07-24 21:51:41.028752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.135 [2024-07-24 21:51:41.028773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.135 [2024-07-24 21:51:41.028781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.135 [2024-07-24 21:51:41.037543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.135 [2024-07-24 21:51:41.037564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.135 [2024-07-24 21:51:41.037572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.135 [2024-07-24 21:51:41.046828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.135 [2024-07-24 21:51:41.046847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.046855] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.055821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.055840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.055848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.065391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.065412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.065420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.074826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.074847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.074855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.084827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.084847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.084855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.093859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.093879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.093887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.102611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.102631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.102639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.112424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.112445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:33.136 [2024-07-24 21:51:41.112453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.122123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.122143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.122151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.130615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.130635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.130644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.140656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.140675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.140683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.149340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.149360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.149369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.159504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.159525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.159533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.169399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.169418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.169426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.177950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.177970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9694 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.177981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.186943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.186963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.186971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.196899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.196920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.196928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.206238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.206257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.206266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.216227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.216248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.216256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.232421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.232442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.232450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.136 [2024-07-24 21:51:41.243020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.136 [2024-07-24 21:51:41.243041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.136 [2024-07-24 21:51:41.243057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.396 [2024-07-24 21:51:41.251471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.251493] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.251501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.264895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.264916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.264924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.275297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.275320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.275328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.287925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.287946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.287954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.298776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.298796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.298804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.315103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.315124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.315132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.325587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.325608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.325616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.334984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.335004] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.335013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.344828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.344847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.344856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.353194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.353214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.353222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.363267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.363287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.363295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.373180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.373201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.373209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.383121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.383142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.383150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.392497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.392518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.392526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.405104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 
00:26:33.397 [2024-07-24 21:51:41.405125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.405133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.417147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.417168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.417176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.426472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.426491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.426499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.437196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.437216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.437224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.447592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.447613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.447622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.456755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.456779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.456788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.469235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.469256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.469265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 [2024-07-24 21:51:41.478819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc754f0) 00:26:33.397 [2024-07-24 21:51:41.478840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.397 [2024-07-24 21:51:41.478848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.397 00:26:33.397 Latency(us) 00:26:33.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.397 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:33.397 nvme0n1 : 2.00 25503.34 99.62 0.00 0.00 5012.68 2421.98 27468.13 00:26:33.397 =================================================================================================================== 00:26:33.397 Total : 25503.34 99.62 0.00 0.00 5012.68 2421.98 27468.13 00:26:33.397 0 00:26:33.397 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:33.397 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:33.397 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:33.397 | .driver_specific 00:26:33.397 | .nvme_error 00:26:33.397 | .status_code 00:26:33.397 | .command_transient_transport_error' 00:26:33.397 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:33.657 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 )) 00:26:33.657 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3202318 00:26:33.657 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3202318 ']' 00:26:33.657 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3202318 00:26:33.658 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:33.658 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:33.658 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3202318 00:26:33.658 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:33.658 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:33.658 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3202318' 00:26:33.658 killing process with pid 3202318 00:26:33.658 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3202318 00:26:33.658 Received shutdown signal, test time was about 2.000000 seconds 00:26:33.658 00:26:33.658 Latency(us) 00:26:33.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.658 =================================================================================================================== 00:26:33.658 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:33.658 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # wait 3202318 00:26:33.917 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:33.917 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:33.917 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:33.917 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:33.917 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:33.917 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3202879 00:26:33.917 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3202879 /var/tmp/bperf.sock 00:26:33.917 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:33.917 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3202879 ']' 00:26:33.917 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:33.917 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:33.917 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:33.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:33.918 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:33.918 21:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.918 [2024-07-24 21:51:41.946796] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:26:33.918 [2024-07-24 21:51:41.946844] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3202879 ] 00:26:33.918 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:33.918 Zero copy mechanism will not be used. 
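The 200-error check a few lines up is get_transient_errcount at work: it dumps per-bdev I/O statistics over the bperf RPC socket and filters out the COMMAND TRANSIENT TRANSPORT ERROR counter with jq. A minimal sketch of that query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and that bdev_nvme_set_options was called with --nvme-error-stat so the counters are populated:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Dump I/O statistics for nvme0n1 from the bdevperf process and extract the
# transient transport error counter maintained by bdev_nvme.
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

# The test passes only if at least one corrupted digest was surfaced this way.
(( errcount > 0 ))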
00:26:33.918 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.918 [2024-07-24 21:51:42.001414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.177 [2024-07-24 21:51:42.074054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.747 21:51:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:34.747 21:51:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:34.747 21:51:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:34.747 21:51:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:35.007 21:51:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:35.007 21:51:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.007 21:51:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.007 21:51:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.007 21:51:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.007 21:51:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.267 nvme0n1 00:26:35.268 21:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:35.268 21:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.268 21:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.268 21:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.268 21:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:35.268 21:51:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:35.268 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.268 Zero copy mechanism will not be used. 00:26:35.268 Running I/O for 2 seconds... 
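Before the 2-second run begins, the trace above wires up the error path end to end: a fresh bdevperf is started idle against /var/tmp/bperf.sock, bdev_nvme is configured to keep per-NVMe error statistics instead of retrying, the controller is attached over TCP with data digest enabled, and accel_error_inject_error is armed to corrupt every 32nd crc32c operation. A condensed sketch of that sequence using the paths shown in the trace; the comment about which process receives the injection RPC is an inference, not something the log states explicitly:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py

# Start bdevperf idle (-z) with the 131072-byte random-read, qd=16 workload used here;
# the harness waits for its RPC socket before issuing the calls below.
"$bperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

# bdevperf side: keep NVMe error statistics and never retry failed commands,
# so every digest failure is counted rather than hidden by a retry.
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the target subsystem with TCP data digest (--ddgst) enabled.
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# rpc_cmd in digest.sh presumably reaches the nvmf target app on its default
# socket: corrupt every 32nd crc32c so data digest checks start failing.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the queued I/O; the digest failures surface as the
# COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions logged below.
"$bperf_py" -s /var/tmp/bperf.sock perform_tests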
00:26:35.268 [2024-07-24 21:51:43.368150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.268 [2024-07-24 21:51:43.368186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.268 [2024-07-24 21:51:43.368197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.383940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.383969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.383978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.398195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.398217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.398226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.413003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.413025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.413034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.428562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.428585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.428594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.442714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.442736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.442744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.457834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.457856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.457865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.472951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.472973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.472982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.488262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.488283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.488292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.502433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.502454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.502463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.516138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.516159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.516167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.529461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.529481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.529489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.542747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.542768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.542775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.556004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.556025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.556033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.569338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.569360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.569372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.582714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.589 [2024-07-24 21:51:43.582735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.589 [2024-07-24 21:51:43.582743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.589 [2024-07-24 21:51:43.596273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.590 [2024-07-24 21:51:43.596293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.590 [2024-07-24 21:51:43.596302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.590 [2024-07-24 21:51:43.609747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.590 [2024-07-24 21:51:43.609767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.590 [2024-07-24 21:51:43.609775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.590 [2024-07-24 21:51:43.629230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.590 [2024-07-24 21:51:43.629250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.590 [2024-07-24 21:51:43.629259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.645745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.645767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.645778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.659866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.659888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:35.883 [2024-07-24 21:51:43.659901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.673907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.673928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.673936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.687644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.687665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.687673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.700832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.700853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.700861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.720878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.720899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.720908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.737451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.737473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.737482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.750825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.750845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.750854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.771178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.771198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.771206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.787877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.787897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.787905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.810748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.810769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.810776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.831402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.831422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.831430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.847695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.847716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.847735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.860965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.860985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.860993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.874992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.875011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.875019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.888973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.888993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.889001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.902151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.902171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.902179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.922529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.922549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.922557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.939114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.939135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.883 [2024-07-24 21:51:43.939143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.883 [2024-07-24 21:51:43.962447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.883 [2024-07-24 21:51:43.962468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.884 [2024-07-24 21:51:43.962476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.884 [2024-07-24 21:51:43.976559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.884 [2024-07-24 21:51:43.976579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.884 [2024-07-24 21:51:43.976587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.884 [2024-07-24 21:51:43.989522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:35.884 [2024-07-24 21:51:43.989546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.884 [2024-07-24 21:51:43.989553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.145 [2024-07-24 21:51:44.003822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 
00:26:36.145 [2024-07-24 21:51:44.003843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.145 [2024-07-24 21:51:44.003851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.145 [2024-07-24 21:51:44.023031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.145 [2024-07-24 21:51:44.023058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.145 [2024-07-24 21:51:44.023066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.145 [2024-07-24 21:51:44.039968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.145 [2024-07-24 21:51:44.039990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.145 [2024-07-24 21:51:44.039998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.145 [2024-07-24 21:51:44.056005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.145 [2024-07-24 21:51:44.056026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.145 [2024-07-24 21:51:44.056035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.145 [2024-07-24 21:51:44.070798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.145 [2024-07-24 21:51:44.070819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.145 [2024-07-24 21:51:44.070828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.145 [2024-07-24 21:51:44.087015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.145 [2024-07-24 21:51:44.087036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.145 [2024-07-24 21:51:44.087049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.145 [2024-07-24 21:51:44.101959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.145 [2024-07-24 21:51:44.101980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.145 [2024-07-24 21:51:44.101988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.145 [2024-07-24 21:51:44.124600] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.145 [2024-07-24 21:51:44.124621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.146 [2024-07-24 21:51:44.124629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.146 [2024-07-24 21:51:44.146013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.146 [2024-07-24 21:51:44.146034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.146 [2024-07-24 21:51:44.146049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.146 [2024-07-24 21:51:44.162701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.146 [2024-07-24 21:51:44.162723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.146 [2024-07-24 21:51:44.162730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.146 [2024-07-24 21:51:44.185908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.146 [2024-07-24 21:51:44.185935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.146 [2024-07-24 21:51:44.185944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.146 [2024-07-24 21:51:44.206872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.146 [2024-07-24 21:51:44.206892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.146 [2024-07-24 21:51:44.206899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.146 [2024-07-24 21:51:44.223795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.146 [2024-07-24 21:51:44.223816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.146 [2024-07-24 21:51:44.223824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.146 [2024-07-24 21:51:44.237035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.146 [2024-07-24 21:51:44.237062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.146 [2024-07-24 21:51:44.237070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:36.146 [2024-07-24 21:51:44.257186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.146 [2024-07-24 21:51:44.257207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.146 [2024-07-24 21:51:44.257215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.274051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.274071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.274080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.298032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.298057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.298068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.314166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.314187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.314195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.327486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.327506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.327515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.340740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.340761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.340769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.353928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.353949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.353958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.367163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.367183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.367192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.380371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.380391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.380400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.393556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.393577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.393586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.406632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.406653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.406662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.419854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.419875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.419883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.433164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.433184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.433192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.446560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.446581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.446589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.459723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.459744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.459752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.473028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.473053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.473062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.486394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.486414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.486422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.499678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.499698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.499706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.407 [2024-07-24 21:51:44.512857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.407 [2024-07-24 21:51:44.512878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-07-24 21:51:44.512886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.526041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.526067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.526080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.539541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.539561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:36.669 [2024-07-24 21:51:44.539569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.552691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.552711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.552719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.565887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.565907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.565915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.579032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.579059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.579067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.592194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.592216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.592224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.605339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.605361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.605369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.618774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.618795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.618803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.631955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.631977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.631986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.645119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.645144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.645152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.658390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.658411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.658419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.671823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.671844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.671853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.685002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.685022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.685031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.698283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.698303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.698311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.711519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.711541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.711550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.724801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.724822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.724831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.738248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.738269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.738278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.751720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.751741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.751749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.765104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.765124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.765132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.669 [2024-07-24 21:51:44.778281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.669 [2024-07-24 21:51:44.778301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-07-24 21:51:44.778309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.930 [2024-07-24 21:51:44.792072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.930 [2024-07-24 21:51:44.792094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.930 [2024-07-24 21:51:44.792102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.930 [2024-07-24 21:51:44.805528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.930 [2024-07-24 21:51:44.805549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.930 [2024-07-24 21:51:44.805558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.930 [2024-07-24 21:51:44.818718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 
00:26:36.930 [2024-07-24 21:51:44.818739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.930 [2024-07-24 21:51:44.818748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.930 [2024-07-24 21:51:44.831858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.930 [2024-07-24 21:51:44.831878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.930 [2024-07-24 21:51:44.831886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.930 [2024-07-24 21:51:44.845120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.930 [2024-07-24 21:51:44.845140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.930 [2024-07-24 21:51:44.845148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.930 [2024-07-24 21:51:44.858542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.930 [2024-07-24 21:51:44.858563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.930 [2024-07-24 21:51:44.858571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.930 [2024-07-24 21:51:44.871960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.930 [2024-07-24 21:51:44.871985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.930 [2024-07-24 21:51:44.871993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.930 [2024-07-24 21:51:44.885156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.930 [2024-07-24 21:51:44.885176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.930 [2024-07-24 21:51:44.885184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.931 [2024-07-24 21:51:44.898764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.931 [2024-07-24 21:51:44.898785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.931 [2024-07-24 21:51:44.898793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.931 [2024-07-24 21:51:44.912152] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.931 [2024-07-24 21:51:44.912173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.931 [2024-07-24 21:51:44.912181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.931 [2024-07-24 21:51:44.925426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.931 [2024-07-24 21:51:44.925446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.931 [2024-07-24 21:51:44.925454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.931 [2024-07-24 21:51:44.938877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.931 [2024-07-24 21:51:44.938898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.931 [2024-07-24 21:51:44.938906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.931 [2024-07-24 21:51:44.952114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.931 [2024-07-24 21:51:44.952135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.931 [2024-07-24 21:51:44.952143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.931 [2024-07-24 21:51:44.965494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.931 [2024-07-24 21:51:44.965514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.931 [2024-07-24 21:51:44.965522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.931 [2024-07-24 21:51:44.978883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.931 [2024-07-24 21:51:44.978903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.931 [2024-07-24 21:51:44.978912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.931 [2024-07-24 21:51:44.992304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.931 [2024-07-24 21:51:44.992324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.931 [2024-07-24 21:51:44.992332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:36.931 [2024-07-24 21:51:45.005485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.931 [2024-07-24 21:51:45.005506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.931 [2024-07-24 21:51:45.005514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.931 [2024-07-24 21:51:45.018618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.931 [2024-07-24 21:51:45.018638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.931 [2024-07-24 21:51:45.018646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.931 [2024-07-24 21:51:45.031743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.931 [2024-07-24 21:51:45.031763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.931 [2024-07-24 21:51:45.031772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.931 [2024-07-24 21:51:45.045205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:36.931 [2024-07-24 21:51:45.045227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.931 [2024-07-24 21:51:45.045235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.058413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.058434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.058443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.071768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.071790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.071799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.084995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.085016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.085023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.098406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.098427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.098439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.111531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.111552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.111559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.124695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.124715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.124724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.137873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.137892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.137900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.150932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.150952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.150960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.164070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.164090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.164098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.177201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.177221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.177229] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.190409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.190429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.190438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.203500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.203520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.203528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.216665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.216689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.216698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.229843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.229863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.229871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.242943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.242964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.242972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.256132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.256152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.256160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.269358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.269378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.269387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.282650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.282670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.282678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.192 [2024-07-24 21:51:45.295882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.192 [2024-07-24 21:51:45.295902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.192 [2024-07-24 21:51:45.295910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.452 [2024-07-24 21:51:45.309139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.452 [2024-07-24 21:51:45.309160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.452 [2024-07-24 21:51:45.309169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.452 [2024-07-24 21:51:45.322341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.452 [2024-07-24 21:51:45.322360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.452 [2024-07-24 21:51:45.322372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.452 [2024-07-24 21:51:45.335543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390030) 00:26:37.452 [2024-07-24 21:51:45.335563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.452 [2024-07-24 21:51:45.335571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.452 00:26:37.452 Latency(us) 00:26:37.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.452 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:37.452 nvme0n1 : 2.00 2112.92 264.11 0.00 0.00 7568.53 6297.15 24618.74 00:26:37.452 =================================================================================================================== 00:26:37.452 Total : 2112.92 264.11 0.00 0.00 7568.53 6297.15 24618.74 00:26:37.452 0 00:26:37.452 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:37.452 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 
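The two trace lines above (host/digest.sh@71 and @27) read the per-bdev NVMe error counters back through the bdevperf RPC socket, and the jq filter traced just below picks out the transient-transport-error count that the test then requires to be non-zero (136 in the trace that follows). A minimal standalone sketch of that same check, assuming the bperf socket at /var/tmp/bperf.sock is still listening and jq is on PATH:

# Sketch only: mirrors the get_transient_errcount check traced around this point.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
# bdev_get_iostat exposes per-status-code NVMe error counters (the statistics
# that bdev_nvme_set_options --nvme-error-stat enables); extract the transient
# transport error count for nvme0n1 with the same jq filter the script uses.
count=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The test only asserts that at least one transient transport error was recorded.
(( count > 0 )) && echo "transient transport errors: $count"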
00:26:37.452 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:37.452 | .driver_specific 00:26:37.452 | .nvme_error 00:26:37.452 | .status_code 00:26:37.452 | .command_transient_transport_error' 00:26:37.452 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:37.452 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:26:37.452 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3202879 00:26:37.452 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3202879 ']' 00:26:37.452 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3202879 00:26:37.452 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:37.452 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:37.452 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3202879 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3202879' 00:26:37.713 killing process with pid 3202879 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3202879 00:26:37.713 Received shutdown signal, test time was about 2.000000 seconds 00:26:37.713 00:26:37.713 Latency(us) 00:26:37.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.713 =================================================================================================================== 00:26:37.713 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3202879 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3203492 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3203492 /var/tmp/bperf.sock 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@829 -- # '[' -z 3203492 ']' 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:37.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:37.713 21:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:37.713 [2024-07-24 21:51:45.803783] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:26:37.713 [2024-07-24 21:51:45.803830] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3203492 ] 00:26:37.713 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.973 [2024-07-24 21:51:45.859093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.973 [2024-07-24 21:51:45.927516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.543 21:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:38.543 21:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:38.543 21:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:38.543 21:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:38.802 21:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:38.802 21:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.802 21:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.803 21:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.803 21:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:38.803 21:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:39.061 nvme0n1 00:26:39.061 21:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:39.061 21:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 
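For the randwrite pass, the trace above wires up the same failure mode before the perform_tests call that follows: error statistics and command retries on the bdev_nvme layer, a TCP controller attached with data digest enabled, and the accel error module told to corrupt crc32c results. A condensed sketch of that setup with the arguments copied from the trace (the socket used for accel_error_inject_error is an assumption here, since rpc_cmd resolves its target elsewhere in the suite):

# Sketch only: consolidates the RPC calls traced above for the randwrite error run.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock
# Keep per-status-code NVMe error counters and let the bdev layer retry failed commands.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach the NVMe-oF TCP controller with data digest (--ddgst) so crc32c is actually checked.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt crc32c results in the accel framework (-o crc32c -t corrupt -i 256, copied
# verbatim from the trace) so data-digest validation fails and the errors above appear;
# rpc_cmd in the trace does not go through the bperf socket, so rpc.py's default socket
# is assumed here.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256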
00:26:39.061 21:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.061 21:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.061 21:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:39.061 21:51:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:39.320 Running I/O for 2 seconds... 00:26:39.320 [2024-07-24 21:51:47.288974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fe720 00:26:39.320 [2024-07-24 21:51:47.289729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.320 [2024-07-24 21:51:47.289762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.320 [2024-07-24 21:51:47.298766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.320 [2024-07-24 21:51:47.299010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.320 [2024-07-24 21:51:47.299034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.320 [2024-07-24 21:51:47.308378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.320 [2024-07-24 21:51:47.308617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.320 [2024-07-24 21:51:47.308638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.320 [2024-07-24 21:51:47.318009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.320 [2024-07-24 21:51:47.318270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.320 [2024-07-24 21:51:47.318290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.320 [2024-07-24 21:51:47.327546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.320 [2024-07-24 21:51:47.327803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.320 [2024-07-24 21:51:47.327822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.320 [2024-07-24 21:51:47.337168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.320 [2024-07-24 21:51:47.337430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.320 [2024-07-24 21:51:47.337450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.320 [2024-07-24 21:51:47.346642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.320 [2024-07-24 21:51:47.346893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.320 [2024-07-24 21:51:47.346913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.320 [2024-07-24 21:51:47.356133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.320 [2024-07-24 21:51:47.356383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.320 [2024-07-24 21:51:47.356410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.320 [2024-07-24 21:51:47.365721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.320 [2024-07-24 21:51:47.365978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.320 [2024-07-24 21:51:47.365997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.320 [2024-07-24 21:51:47.375240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.320 [2024-07-24 21:51:47.375491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.320 [2024-07-24 21:51:47.375511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.320 [2024-07-24 21:51:47.384765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.320 [2024-07-24 21:51:47.385015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.320 [2024-07-24 21:51:47.385034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.320 [2024-07-24 21:51:47.394367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.320 [2024-07-24 21:51:47.394619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.320 [2024-07-24 21:51:47.394639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.321 [2024-07-24 21:51:47.403872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.321 [2024-07-24 21:51:47.404123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.321 [2024-07-24 
21:51:47.404143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.321 [2024-07-24 21:51:47.413420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.321 [2024-07-24 21:51:47.413672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.321 [2024-07-24 21:51:47.413691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.321 [2024-07-24 21:51:47.422937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.321 [2024-07-24 21:51:47.423193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.321 [2024-07-24 21:51:47.423212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.321 [2024-07-24 21:51:47.432493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.321 [2024-07-24 21:51:47.432749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.321 [2024-07-24 21:51:47.432769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.442287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.442536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.442555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.451777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.452021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.452040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.461337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.461592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.461611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.471102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.471354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18508 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:39.580 [2024-07-24 21:51:47.471375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.480598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.480849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.480868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.490156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.490413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.490432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.499975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.500236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.500256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.509621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.509877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.509896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.519195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.519448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.519467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.528714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.528962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.528981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.538226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.538483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24702 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.538501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.547796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.548039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.548063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.557446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.557693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.557712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.566988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.567248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.567266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.576529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.576779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.576798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.586005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.586265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.586285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.595622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.595866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.595885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.605116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.605367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:123 nsid:1 lba:19223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.605389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.614649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.614903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.614923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.624197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.624448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.624468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.633700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.633946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.633964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.643223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.643474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.643492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.652819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.653070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.653089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.662296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.662547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.662566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.671889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.672141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.580 [2024-07-24 21:51:47.672160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.580 [2024-07-24 21:51:47.681394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.580 [2024-07-24 21:51:47.681644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.581 [2024-07-24 21:51:47.681664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.581 [2024-07-24 21:51:47.690918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.581 [2024-07-24 21:51:47.691186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.581 [2024-07-24 21:51:47.691206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.700763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.701014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.701033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.710272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.710520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.710540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.719848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.720103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.720122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.729365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.729611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.729629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.738855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 
21:51:47.739104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.739123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.748408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.748658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.748676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.757913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.758169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.758188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.767435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.767681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.767700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.776988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.777241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.777262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.786478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.786725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.786744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.796010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.796264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.796283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.805551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with 
pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.805800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.805818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.815193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.815445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.815464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.824741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.824992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.825010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.834260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.834515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.834534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.843786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.844039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.844063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.853340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.853592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.853614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.862827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.863067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.863086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.872332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xf23420) with pdu=0x2000190fcdd0 00:26:39.841 [2024-07-24 21:51:47.874406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.874424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.885761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fe720 00:26:39.841 [2024-07-24 21:51:47.887493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.887513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.896398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fa7d8 00:26:39.841 [2024-07-24 21:51:47.896608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.896627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.905959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fa7d8 00:26:39.841 [2024-07-24 21:51:47.906311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.906330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.915504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fa7d8 00:26:39.841 [2024-07-24 21:51:47.916095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.916114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.924954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fa7d8 00:26:39.841 [2024-07-24 21:51:47.925138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.841 [2024-07-24 21:51:47.925157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.841 [2024-07-24 21:51:47.934478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fa7d8 00:26:39.841 [2024-07-24 21:51:47.935646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.842 [2024-07-24 21:51:47.935665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.842 [2024-07-24 21:51:47.946403] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fef90 00:26:39.842 [2024-07-24 21:51:47.947539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.842 [2024-07-24 21:51:47.947558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:39.842 [2024-07-24 21:51:47.955857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f6458 00:26:40.102 [2024-07-24 21:51:47.956563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:47.956583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:47.965131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb048 00:26:40.102 [2024-07-24 21:51:47.965935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:47.965954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:47.974236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eea00 00:26:40.102 [2024-07-24 21:51:47.975805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:47.975824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:47.986788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190ed920 00:26:40.102 [2024-07-24 21:51:47.987918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:47.987937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:47.998169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fd640 00:26:40.102 [2024-07-24 21:51:47.999412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:47.999431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.006717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fdeb0 00:26:40.102 [2024-07-24 21:51:48.007534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.007552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.015889] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e0ea0 00:26:40.102 [2024-07-24 21:51:48.016756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.016775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.026199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fdeb0 00:26:40.102 [2024-07-24 21:51:48.027469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.027489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.034626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eb328 00:26:40.102 [2024-07-24 21:51:48.035565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.035583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.043702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eb328 00:26:40.102 [2024-07-24 21:51:48.044737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.044756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.052773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eb328 00:26:40.102 [2024-07-24 21:51:48.053727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.053747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.062051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eb328 00:26:40.102 [2024-07-24 21:51:48.062999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.063017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.071330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eb328 00:26:40.102 [2024-07-24 21:51:48.072303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.072322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 
21:51:48.080457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eb328 00:26:40.102 [2024-07-24 21:51:48.081475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.081494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.089638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eb328 00:26:40.102 [2024-07-24 21:51:48.090583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.090603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.098754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eb328 00:26:40.102 [2024-07-24 21:51:48.099693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.099713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.107862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eb328 00:26:40.102 [2024-07-24 21:51:48.108955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.108974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.116707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f92c0 00:26:40.102 [2024-07-24 21:51:48.119710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.102 [2024-07-24 21:51:48.119728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:40.102 [2024-07-24 21:51:48.133418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f0ff8 00:26:40.103 [2024-07-24 21:51:48.134357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.103 [2024-07-24 21:51:48.134375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.103 [2024-07-24 21:51:48.143255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eea00 00:26:40.103 [2024-07-24 21:51:48.143473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.103 [2024-07-24 21:51:48.143492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
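(Aside, not part of the captured console output.) The records above all follow one pattern: the TCP transport verifies the NVMe/TCP data digest over a received DATA PDU in data_crc32_calc_done(), finds a mismatch, logs "Data digest error", and the WRITE then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) so the initiator may retry. As a rough sketch of the check that is failing, assuming only that the data digest is a CRC-32C over the PDU payload (the function and variable names below are invented for illustration and are not SPDK's C implementation):

def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), the digest NVMe/TCP carries after PDU data."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # reflected polynomial 0x82F63B78
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

def data_digest_ok(pdu_payload: bytes, received_digest: int) -> bool:
    """True when the digest received with a DATA PDU matches the recomputed one."""
    return crc32c(pdu_payload) == received_digest

if __name__ == "__main__":
    assert crc32c(b"123456789") == 0xE3069283   # published CRC-32C check value
    payload = bytes(4096)                        # one 0x1000-byte write, as in the log
    good = crc32c(payload)
    print(data_digest_ok(payload, good))         # True
    print(data_digest_ok(payload, good ^ 1))     # False -> the "Data digest error" path

A corrupted digest (the second call) is the situation the digest-error test provokes on purpose, which is why every affected I/O above is reported as a transient transport error rather than a data error.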
00:26:40.103 [2024-07-24 21:51:48.152808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eea00 00:26:40.103 [2024-07-24 21:51:48.153000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.103 [2024-07-24 21:51:48.153017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:40.103 [2024-07-24 21:51:48.162330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eea00 00:26:40.103 [2024-07-24 21:51:48.162867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.103 [2024-07-24 21:51:48.162885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:40.103 [2024-07-24 21:51:48.172276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190eea00 00:26:40.103 [2024-07-24 21:51:48.174687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.103 [2024-07-24 21:51:48.174706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.103 [2024-07-24 21:51:48.184855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fef90 00:26:40.103 [2024-07-24 21:51:48.185685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.103 [2024-07-24 21:51:48.185703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:40.103 [2024-07-24 21:51:48.194490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f0bc0 00:26:40.103 [2024-07-24 21:51:48.194750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.103 [2024-07-24 21:51:48.194768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.103 [2024-07-24 21:51:48.204095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f0bc0 00:26:40.103 [2024-07-24 21:51:48.204466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.103 [2024-07-24 21:51:48.204488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.103 [2024-07-24 21:51:48.215929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fe2e8 00:26:40.103 [2024-07-24 21:51:48.217369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.103 [2024-07-24 21:51:48.217388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:40.363 [2024-07-24 21:51:48.227801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f0bc0 00:26:40.363 [2024-07-24 21:51:48.228727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-24 21:51:48.228746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:40.363 [2024-07-24 21:51:48.237369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f0bc0 00:26:40.363 [2024-07-24 21:51:48.238014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-24 21:51:48.238032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:40.363 [2024-07-24 21:51:48.246896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f0bc0 00:26:40.363 [2024-07-24 21:51:48.247111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-24 21:51:48.247130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:40.363 [2024-07-24 21:51:48.256433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f0bc0 00:26:40.363 [2024-07-24 21:51:48.256665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-24 21:51:48.256684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:40.363 [2024-07-24 21:51:48.267790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fe2e8 00:26:40.363 [2024-07-24 21:51:48.268927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-24 21:51:48.268947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:40.363 [2024-07-24 21:51:48.278102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f9b30 00:26:40.363 [2024-07-24 21:51:48.279008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-24 21:51:48.279026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.287096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190efae0 00:26:40.364 [2024-07-24 21:51:48.288248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.288267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 
sqhd:006a p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.297768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190ee190 00:26:40.364 [2024-07-24 21:51:48.298842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.298861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.307489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fc998 00:26:40.364 [2024-07-24 21:51:48.308539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.308559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.316662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e1f80 00:26:40.364 [2024-07-24 21:51:48.317876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.317895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.325700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f0ff8 00:26:40.364 [2024-07-24 21:51:48.328645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.328664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.340001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f0bc0 00:26:40.364 [2024-07-24 21:51:48.340936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.340955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.349036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f2510 00:26:40.364 [2024-07-24 21:51:48.350715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.350733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.360209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f81e0 00:26:40.364 [2024-07-24 21:51:48.361329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.361348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.371209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f4b08 00:26:40.364 [2024-07-24 21:51:48.372383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.372402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.380751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f4b08 00:26:40.364 [2024-07-24 21:51:48.381577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.381596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.390193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f4b08 00:26:40.364 [2024-07-24 21:51:48.390723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.390742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.399768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f4b08 00:26:40.364 [2024-07-24 21:51:48.400360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.400379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.409241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f4b08 00:26:40.364 [2024-07-24 21:51:48.409658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.409676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.418940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f4b08 00:26:40.364 [2024-07-24 21:51:48.419441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.419460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.430279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190ec408 00:26:40.364 [2024-07-24 21:51:48.431458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.431477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.440394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f81e0 00:26:40.364 [2024-07-24 21:51:48.441216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.441235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.449473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7970 00:26:40.364 [2024-07-24 21:51:48.450530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.450549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.458594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f81e0 00:26:40.364 [2024-07-24 21:51:48.459743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.459762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:40.364 [2024-07-24 21:51:48.471197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190feb58 00:26:40.364 [2024-07-24 21:51:48.472276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-24 21:51:48.472298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.483082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f1868 00:26:40.625 [2024-07-24 21:51:48.484024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.484048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.492120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190ee5c8 00:26:40.625 [2024-07-24 21:51:48.492860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.492878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.501612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e12d8 00:26:40.625 [2024-07-24 21:51:48.502581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.502600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.510725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e8d30 00:26:40.625 [2024-07-24 21:51:48.511687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.511706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.519814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb048 00:26:40.625 [2024-07-24 21:51:48.520790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.520809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.528959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e12d8 00:26:40.625 [2024-07-24 21:51:48.529835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.529854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.541526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190ff3c8 00:26:40.625 [2024-07-24 21:51:48.543461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.543479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.553887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e8d30 00:26:40.625 [2024-07-24 21:51:48.554822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.554840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.563567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e8d30 00:26:40.625 [2024-07-24 21:51:48.563795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.563813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.573095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e8d30 00:26:40.625 [2024-07-24 21:51:48.573927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.573945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.584728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fa3a0 00:26:40.625 [2024-07-24 21:51:48.585924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.585942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.594271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.625 [2024-07-24 21:51:48.595297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.595316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.603376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.625 [2024-07-24 21:51:48.604325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.604344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.612560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.625 [2024-07-24 21:51:48.613526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.613546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.621715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.625 [2024-07-24 21:51:48.622682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.622701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.630827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.625 [2024-07-24 21:51:48.631845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 21:51:48.631865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.625 [2024-07-24 21:51:48.639997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.625 [2024-07-24 21:51:48.640969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.625 [2024-07-24 
21:51:48.640988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.626 [2024-07-24 21:51:48.649112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.626 [2024-07-24 21:51:48.650079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.626 [2024-07-24 21:51:48.650099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.626 [2024-07-24 21:51:48.658234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.626 [2024-07-24 21:51:48.659118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.626 [2024-07-24 21:51:48.659138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.626 [2024-07-24 21:51:48.667392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.626 [2024-07-24 21:51:48.668385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.626 [2024-07-24 21:51:48.668403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.626 [2024-07-24 21:51:48.676601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.626 [2024-07-24 21:51:48.677486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.626 [2024-07-24 21:51:48.677504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.626 [2024-07-24 21:51:48.685739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.626 [2024-07-24 21:51:48.686709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.626 [2024-07-24 21:51:48.686728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.626 [2024-07-24 21:51:48.694874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.626 [2024-07-24 21:51:48.695820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.626 [2024-07-24 21:51:48.695838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.626 [2024-07-24 21:51:48.703953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.626 [2024-07-24 21:51:48.704941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.626 
[2024-07-24 21:51:48.704959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.626 [2024-07-24 21:51:48.713148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.626 [2024-07-24 21:51:48.714140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.626 [2024-07-24 21:51:48.714159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.626 [2024-07-24 21:51:48.722258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.626 [2024-07-24 21:51:48.723214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.626 [2024-07-24 21:51:48.723236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.626 [2024-07-24 21:51:48.731364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.626 [2024-07-24 21:51:48.732324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.626 [2024-07-24 21:51:48.732343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.740730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.886 [2024-07-24 21:51:48.741738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.741758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.749990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.886 [2024-07-24 21:51:48.750888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.750906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.759029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.886 [2024-07-24 21:51:48.759997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.760016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.768018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.886 [2024-07-24 21:51:48.768982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24786 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:40.886 [2024-07-24 21:51:48.769001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.777117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.886 [2024-07-24 21:51:48.778107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.778126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.786290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.886 [2024-07-24 21:51:48.787265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.787283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.795394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.886 [2024-07-24 21:51:48.796335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.796353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.804488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.886 [2024-07-24 21:51:48.805462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.805481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.813646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.886 [2024-07-24 21:51:48.814609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.814629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.822748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.886 [2024-07-24 21:51:48.823716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.823735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.831998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.886 [2024-07-24 21:51:48.832924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:833 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.832943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.841098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.886 [2024-07-24 21:51:48.842032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.842055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.850170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.886 [2024-07-24 21:51:48.851146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.851164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.859334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.886 [2024-07-24 21:51:48.860213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.860233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.868470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.886 [2024-07-24 21:51:48.869410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.869429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.877581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.886 [2024-07-24 21:51:48.878553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.878572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.886754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.886 [2024-07-24 21:51:48.887744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.887763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.895874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.886 [2024-07-24 21:51:48.896843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:17507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.896861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.905067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.886 [2024-07-24 21:51:48.906025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.906047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.914361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.886 [2024-07-24 21:51:48.915284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.915303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.923770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.886 [2024-07-24 21:51:48.924760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.924780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.933082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.886 [2024-07-24 21:51:48.934032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.934055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.942231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.886 [2024-07-24 21:51:48.943192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.886 [2024-07-24 21:51:48.943210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.886 [2024-07-24 21:51:48.951343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.887 [2024-07-24 21:51:48.952286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.887 [2024-07-24 21:51:48.952305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.887 [2024-07-24 21:51:48.960532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.887 [2024-07-24 21:51:48.961486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:10161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.887 [2024-07-24 21:51:48.961509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.887 [2024-07-24 21:51:48.969676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.887 [2024-07-24 21:51:48.970594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.887 [2024-07-24 21:51:48.970613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.887 [2024-07-24 21:51:48.978798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.887 [2024-07-24 21:51:48.979702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.887 [2024-07-24 21:51:48.979722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.887 [2024-07-24 21:51:48.987993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:40.887 [2024-07-24 21:51:48.988885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.887 [2024-07-24 21:51:48.988904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.887 [2024-07-24 21:51:48.997160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:40.887 [2024-07-24 21:51:48.998077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.887 [2024-07-24 21:51:48.998095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.146 [2024-07-24 21:51:49.006553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:41.146 [2024-07-24 21:51:49.007507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.146 [2024-07-24 21:51:49.007525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.146 [2024-07-24 21:51:49.015727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:41.146 [2024-07-24 21:51:49.016633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.146 [2024-07-24 21:51:49.016652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.146 [2024-07-24 21:51:49.024846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:41.146 [2024-07-24 21:51:49.025826] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.146 [2024-07-24 21:51:49.025845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.146 [2024-07-24 21:51:49.034024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:41.146 [2024-07-24 21:51:49.034977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.146 [2024-07-24 21:51:49.034996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.146 [2024-07-24 21:51:49.043192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:41.146 [2024-07-24 21:51:49.044130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.146 [2024-07-24 21:51:49.044153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.052317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:41.147 [2024-07-24 21:51:49.053290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.053308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.061514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:41.147 [2024-07-24 21:51:49.062407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.062426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.070716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:41.147 [2024-07-24 21:51:49.071615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.071634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.079849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:41.147 [2024-07-24 21:51:49.080801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.080820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.089171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:41.147 [2024-07-24 21:51:49.090110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.090129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.098294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f7da8 00:26:41.147 [2024-07-24 21:51:49.099193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.099212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.107439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e6b70 00:26:41.147 [2024-07-24 21:51:49.109421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.109440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.118469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190e4578 00:26:41.147 [2024-07-24 21:51:49.120183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.120203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.131833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb048 00:26:41.147 [2024-07-24 21:51:49.133206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.133226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.144315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190f4298 00:26:41.147 [2024-07-24 21:51:49.145465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.145486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.156024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb8b8 00:26:41.147 [2024-07-24 21:51:49.156268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.156290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.166788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb8b8 00:26:41.147 [2024-07-24 21:51:49.167026] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.167053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.177503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb8b8 00:26:41.147 [2024-07-24 21:51:49.177740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.177760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.187569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb8b8 00:26:41.147 [2024-07-24 21:51:49.187822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.187841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.197595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb8b8 00:26:41.147 [2024-07-24 21:51:49.197839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.197858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.207487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb8b8 00:26:41.147 [2024-07-24 21:51:49.207729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.207748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.217253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb8b8 00:26:41.147 [2024-07-24 21:51:49.217487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.217506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.226814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb8b8 00:26:41.147 [2024-07-24 21:51:49.227055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.227074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.236507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb8b8 00:26:41.147 [2024-07-24 
21:51:49.236743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.236762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.246059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb8b8 00:26:41.147 [2024-07-24 21:51:49.246294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.246313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.147 [2024-07-24 21:51:49.255699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf23420) with pdu=0x2000190fb8b8 00:26:41.147 [2024-07-24 21:51:49.255935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.147 [2024-07-24 21:51:49.255953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.147 00:26:41.147 Latency(us) 00:26:41.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.147 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:41.147 nvme0n1 : 2.00 25869.59 101.05 0.00 0.00 4939.07 2364.99 32141.13 00:26:41.147 =================================================================================================================== 00:26:41.147 Total : 25869.59 101.05 0.00 0.00 4939.07 2364.99 32141.13 00:26:41.407 0 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:41.407 | .driver_specific 00:26:41.407 | .nvme_error 00:26:41.407 | .status_code 00:26:41.407 | .command_transient_transport_error' 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 203 > 0 )) 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3203492 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3203492 ']' 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3203492 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3203492 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3203492' 00:26:41.407 killing process with pid 3203492 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3203492 00:26:41.407 Received shutdown signal, test time was about 2.000000 seconds 00:26:41.407 00:26:41.407 Latency(us) 00:26:41.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.407 =================================================================================================================== 00:26:41.407 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:41.407 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3203492 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3204188 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3204188 /var/tmp/bperf.sock 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3204188 ']' 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:41.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:41.666 21:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:41.666 [2024-07-24 21:51:49.730436] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:26:41.666 [2024-07-24 21:51:49.730484] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3204188 ] 00:26:41.666 I/O size of 131072 is greater than zero copy threshold (65536). 
00:26:41.666 Zero copy mechanism will not be used. 00:26:41.666 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.926 [2024-07-24 21:51:49.785627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.926 [2024-07-24 21:51:49.865345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.495 21:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:42.495 21:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:42.495 21:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.495 21:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.755 21:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:42.755 21:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.755 21:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.755 21:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.755 21:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.755 21:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:43.014 nvme0n1 00:26:43.014 21:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:43.014 21:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.014 21:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:43.014 21:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.014 21:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:43.014 21:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:43.014 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:43.014 Zero copy mechanism will not be used. 00:26:43.014 Running I/O for 2 seconds... 
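For reference, the digest-error pass being set up above reduces to the following command sequence (a minimal sketch assembled only from the invocations traced in this log; socket paths, addresses, and bdev names are the ones shown here, and the error-injection RPC goes through rpc_cmd's default application socket exactly as in the trace):
  # initiator side: bdevperf with its own RPC socket, 131072-byte randwrite, queue depth 16, 2 s run
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # arm crc32c error injection via the main app's RPC socket (rpc_cmd in the trace) so data digests come out corrupted
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # initiator side: keep per-NVMe error statistics, retry indefinitely, and attach with data digest (--ddgst) enabled
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the workload, then read back how many commands completed with a transient transport error
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
The test then asserts that this count is greater than zero, as the (( 203 > 0 )) check did for the previous pass.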
00:26:43.274 [2024-07-24 21:51:51.139240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.274 [2024-07-24 21:51:51.139965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.274 [2024-07-24 21:51:51.139996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.274 [2024-07-24 21:51:51.159238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.274 [2024-07-24 21:51:51.159956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.274 [2024-07-24 21:51:51.159980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.274 [2024-07-24 21:51:51.179530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.274 [2024-07-24 21:51:51.180219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.274 [2024-07-24 21:51:51.180241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.274 [2024-07-24 21:51:51.198781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.274 [2024-07-24 21:51:51.199457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.274 [2024-07-24 21:51:51.199477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.274 [2024-07-24 21:51:51.218787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.274 [2024-07-24 21:51:51.219464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.274 [2024-07-24 21:51:51.219485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.274 [2024-07-24 21:51:51.237219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.274 [2024-07-24 21:51:51.237734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.274 [2024-07-24 21:51:51.237755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.274 [2024-07-24 21:51:51.255798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.274 [2024-07-24 21:51:51.256558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.274 [2024-07-24 21:51:51.256579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.275 [2024-07-24 21:51:51.274091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.275 [2024-07-24 21:51:51.274394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.275 [2024-07-24 21:51:51.274414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.275 [2024-07-24 21:51:51.294817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.275 [2024-07-24 21:51:51.295489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.275 [2024-07-24 21:51:51.295508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.275 [2024-07-24 21:51:51.315606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.275 [2024-07-24 21:51:51.316222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.275 [2024-07-24 21:51:51.316241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.275 [2024-07-24 21:51:51.334883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.275 [2024-07-24 21:51:51.335574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.275 [2024-07-24 21:51:51.335594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.275 [2024-07-24 21:51:51.355674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.275 [2024-07-24 21:51:51.356486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.275 [2024-07-24 21:51:51.356507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.275 [2024-07-24 21:51:51.377427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.275 [2024-07-24 21:51:51.378025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.275 [2024-07-24 21:51:51.378048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.535 [2024-07-24 21:51:51.398326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.535 [2024-07-24 21:51:51.398932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.535 [2024-07-24 21:51:51.398951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.535 [2024-07-24 21:51:51.419744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.535 [2024-07-24 21:51:51.420407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.535 [2024-07-24 21:51:51.420427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.535 [2024-07-24 21:51:51.439727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.535 [2024-07-24 21:51:51.440526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.535 [2024-07-24 21:51:51.440555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.535 [2024-07-24 21:51:51.462028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.535 [2024-07-24 21:51:51.462406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.535 [2024-07-24 21:51:51.462426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.535 [2024-07-24 21:51:51.484820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.535 [2024-07-24 21:51:51.485523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.535 [2024-07-24 21:51:51.485543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.535 [2024-07-24 21:51:51.506281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.535 [2024-07-24 21:51:51.506884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.535 [2024-07-24 21:51:51.506904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.535 [2024-07-24 21:51:51.527378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.535 [2024-07-24 21:51:51.527995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.535 [2024-07-24 21:51:51.528014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.535 [2024-07-24 21:51:51.548056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.535 [2024-07-24 21:51:51.548574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.535 [2024-07-24 21:51:51.548593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.535 [2024-07-24 21:51:51.569461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.535 [2024-07-24 21:51:51.569931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.535 [2024-07-24 21:51:51.569953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.535 [2024-07-24 21:51:51.592108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.535 [2024-07-24 21:51:51.592948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.535 [2024-07-24 21:51:51.592967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.535 [2024-07-24 21:51:51.612119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.535 [2024-07-24 21:51:51.612496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.535 [2024-07-24 21:51:51.612515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.535 [2024-07-24 21:51:51.632575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.535 [2024-07-24 21:51:51.633125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.535 [2024-07-24 21:51:51.633145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.794 [2024-07-24 21:51:51.654651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.794 [2024-07-24 21:51:51.655476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.794 [2024-07-24 21:51:51.655496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.794 [2024-07-24 21:51:51.674852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.794 [2024-07-24 21:51:51.675478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.794 [2024-07-24 21:51:51.675498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.794 [2024-07-24 21:51:51.694974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.795 [2024-07-24 21:51:51.695830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.795 
[2024-07-24 21:51:51.695849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.795 [2024-07-24 21:51:51.716407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.795 [2024-07-24 21:51:51.717187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.795 [2024-07-24 21:51:51.717208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.795 [2024-07-24 21:51:51.737520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.795 [2024-07-24 21:51:51.737994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.795 [2024-07-24 21:51:51.738015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.795 [2024-07-24 21:51:51.758772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.795 [2024-07-24 21:51:51.759541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.795 [2024-07-24 21:51:51.759560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.795 [2024-07-24 21:51:51.780169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.795 [2024-07-24 21:51:51.780768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.795 [2024-07-24 21:51:51.780787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.795 [2024-07-24 21:51:51.800922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.795 [2024-07-24 21:51:51.801640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.795 [2024-07-24 21:51:51.801660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.795 [2024-07-24 21:51:51.823640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.795 [2024-07-24 21:51:51.824272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.795 [2024-07-24 21:51:51.824292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.795 [2024-07-24 21:51:51.845365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.795 [2024-07-24 21:51:51.846002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:43.795 [2024-07-24 21:51:51.846022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.795 [2024-07-24 21:51:51.866051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.795 [2024-07-24 21:51:51.866531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.795 [2024-07-24 21:51:51.866551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.795 [2024-07-24 21:51:51.887426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:43.795 [2024-07-24 21:51:51.888178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.795 [2024-07-24 21:51:51.888198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.795 [2024-07-24 21:51:51.910098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.053 [2024-07-24 21:51:51.910879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.053 [2024-07-24 21:51:51.910902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.053 [2024-07-24 21:51:51.933558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.053 [2024-07-24 21:51:51.934345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.053 [2024-07-24 21:51:51.934364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.053 [2024-07-24 21:51:51.955503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.053 [2024-07-24 21:51:51.956106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.053 [2024-07-24 21:51:51.956126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.053 [2024-07-24 21:51:51.977343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.053 [2024-07-24 21:51:51.978020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.053 [2024-07-24 21:51:51.978039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.053 [2024-07-24 21:51:51.995628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.053 [2024-07-24 21:51:51.996098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.053 [2024-07-24 21:51:51.996116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.053 [2024-07-24 21:51:52.015427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.053 [2024-07-24 21:51:52.016122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.053 [2024-07-24 21:51:52.016141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.053 [2024-07-24 21:51:52.036975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.053 [2024-07-24 21:51:52.037531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.053 [2024-07-24 21:51:52.037549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.053 [2024-07-24 21:51:52.056630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.053 [2024-07-24 21:51:52.057328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.053 [2024-07-24 21:51:52.057347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.053 [2024-07-24 21:51:52.078153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.053 [2024-07-24 21:51:52.078769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.053 [2024-07-24 21:51:52.078788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.053 [2024-07-24 21:51:52.099008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.053 [2024-07-24 21:51:52.099405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.053 [2024-07-24 21:51:52.099424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.053 [2024-07-24 21:51:52.120395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.053 [2024-07-24 21:51:52.121023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.053 [2024-07-24 21:51:52.121053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.053 [2024-07-24 21:51:52.150759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.053 [2024-07-24 21:51:52.151782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.053 [2024-07-24 21:51:52.151800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.312 [2024-07-24 21:51:52.172819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.312 [2024-07-24 21:51:52.173345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.312 [2024-07-24 21:51:52.173376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.312 [2024-07-24 21:51:52.193204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.312 [2024-07-24 21:51:52.193969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.312 [2024-07-24 21:51:52.193987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.312 [2024-07-24 21:51:52.222822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.312 [2024-07-24 21:51:52.223338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.312 [2024-07-24 21:51:52.223357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.312 [2024-07-24 21:51:52.242593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.312 [2024-07-24 21:51:52.243286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.312 [2024-07-24 21:51:52.243305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.312 [2024-07-24 21:51:52.262389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.312 [2024-07-24 21:51:52.263150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.312 [2024-07-24 21:51:52.263168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.312 [2024-07-24 21:51:52.283402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.312 [2024-07-24 21:51:52.283857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.312 [2024-07-24 21:51:52.283875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.312 [2024-07-24 21:51:52.313560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.312 
[2024-07-24 21:51:52.314025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.312 [2024-07-24 21:51:52.314047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.312 [2024-07-24 21:51:52.334861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.312 [2024-07-24 21:51:52.335553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.312 [2024-07-24 21:51:52.335572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.312 [2024-07-24 21:51:52.360793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.312 [2024-07-24 21:51:52.361751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.313 [2024-07-24 21:51:52.361770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.313 [2024-07-24 21:51:52.385156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.313 [2024-07-24 21:51:52.385847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.313 [2024-07-24 21:51:52.385866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.313 [2024-07-24 21:51:52.406681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.313 [2024-07-24 21:51:52.407146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.313 [2024-07-24 21:51:52.407165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.313 [2024-07-24 21:51:52.427509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.313 [2024-07-24 21:51:52.427974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.313 [2024-07-24 21:51:52.427992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.571 [2024-07-24 21:51:52.449188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.571 [2024-07-24 21:51:52.450024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.571 [2024-07-24 21:51:52.450046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.571 [2024-07-24 21:51:52.470769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with 
pdu=0x2000190fef90 00:26:44.571 [2024-07-24 21:51:52.471621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.571 [2024-07-24 21:51:52.471640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.571 [2024-07-24 21:51:52.492293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.571 [2024-07-24 21:51:52.492818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.571 [2024-07-24 21:51:52.492836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.571 [2024-07-24 21:51:52.511305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.571 [2024-07-24 21:51:52.511985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.571 [2024-07-24 21:51:52.512003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.571 [2024-07-24 21:51:52.530516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.571 [2024-07-24 21:51:52.530906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.571 [2024-07-24 21:51:52.530925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.571 [2024-07-24 21:51:52.550444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.571 [2024-07-24 21:51:52.550907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.571 [2024-07-24 21:51:52.550925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.571 [2024-07-24 21:51:52.570953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.571 [2024-07-24 21:51:52.571545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.572 [2024-07-24 21:51:52.571563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.572 [2024-07-24 21:51:52.592023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.572 [2024-07-24 21:51:52.592933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.572 [2024-07-24 21:51:52.592952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.572 [2024-07-24 21:51:52.615381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.572 [2024-07-24 21:51:52.616072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.572 [2024-07-24 21:51:52.616092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.572 [2024-07-24 21:51:52.637846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.572 [2024-07-24 21:51:52.638467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.572 [2024-07-24 21:51:52.638486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.572 [2024-07-24 21:51:52.665471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.572 [2024-07-24 21:51:52.666369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.572 [2024-07-24 21:51:52.666387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.831 [2024-07-24 21:51:52.688384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.831 [2024-07-24 21:51:52.689246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.831 [2024-07-24 21:51:52.689266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.831 [2024-07-24 21:51:52.717591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.831 [2024-07-24 21:51:52.718292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.831 [2024-07-24 21:51:52.718315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.831 [2024-07-24 21:51:52.749139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.831 [2024-07-24 21:51:52.749707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.831 [2024-07-24 21:51:52.749725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.831 [2024-07-24 21:51:52.778879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.831 [2024-07-24 21:51:52.779661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.831 [2024-07-24 21:51:52.779679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.831 [2024-07-24 21:51:52.806942] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.831 [2024-07-24 21:51:52.807864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.831 [2024-07-24 21:51:52.807883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.831 [2024-07-24 21:51:52.829952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.831 [2024-07-24 21:51:52.830428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.831 [2024-07-24 21:51:52.830446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.831 [2024-07-24 21:51:52.858997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.831 [2024-07-24 21:51:52.859700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.831 [2024-07-24 21:51:52.859719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.831 [2024-07-24 21:51:52.880764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.831 [2024-07-24 21:51:52.881380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.831 [2024-07-24 21:51:52.881399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.831 [2024-07-24 21:51:52.901153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.831 [2024-07-24 21:51:52.901756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.831 [2024-07-24 21:51:52.901775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.831 [2024-07-24 21:51:52.921873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.831 [2024-07-24 21:51:52.922536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.831 [2024-07-24 21:51:52.922554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.831 [2024-07-24 21:51:52.943539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:44.831 [2024-07-24 21:51:52.944318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.831 [2024-07-24 21:51:52.944337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:45.091 [2024-07-24 21:51:52.965514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:45.091 [2024-07-24 21:51:52.966185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.091 [2024-07-24 21:51:52.966204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.091 [2024-07-24 21:51:52.985900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:45.091 [2024-07-24 21:51:52.986652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.091 [2024-07-24 21:51:52.986670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.091 [2024-07-24 21:51:53.006639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:45.091 [2024-07-24 21:51:53.007308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.091 [2024-07-24 21:51:53.007326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.091 [2024-07-24 21:51:53.025410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:45.091 [2024-07-24 21:51:53.026102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.091 [2024-07-24 21:51:53.026121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.091 [2024-07-24 21:51:53.044500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:45.091 [2024-07-24 21:51:53.044878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.091 [2024-07-24 21:51:53.044897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.091 [2024-07-24 21:51:53.065843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:45.091 [2024-07-24 21:51:53.066359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.091 [2024-07-24 21:51:53.066378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.091 [2024-07-24 21:51:53.087369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:45.091 [2024-07-24 21:51:53.088203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.091 [2024-07-24 21:51:53.088221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.091 [2024-07-24 21:51:53.108584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf250a0) with pdu=0x2000190fef90 00:26:45.091 [2024-07-24 21:51:53.109117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.091 [2024-07-24 21:51:53.109136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.091 00:26:45.091 Latency(us) 00:26:45.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.091 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:45.091 nvme0n1 : 2.01 1406.64 175.83 0.00 0.00 11342.21 7579.38 38295.82 00:26:45.091 =================================================================================================================== 00:26:45.091 Total : 1406.64 175.83 0.00 0.00 11342.21 7579.38 38295.82 00:26:45.091 0 00:26:45.091 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:45.091 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:45.091 | .driver_specific 00:26:45.091 | .nvme_error 00:26:45.091 | .status_code 00:26:45.091 | .command_transient_transport_error' 00:26:45.091 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:45.091 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:45.351 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 91 > 0 )) 00:26:45.351 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3204188 00:26:45.351 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3204188 ']' 00:26:45.351 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3204188 00:26:45.351 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:45.351 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.351 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3204188 00:26:45.351 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:45.351 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:45.351 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3204188' 00:26:45.351 killing process with pid 3204188 00:26:45.351 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3204188 00:26:45.351 Received shutdown signal, test time was about 2.000000 seconds 00:26:45.351 00:26:45.351 Latency(us) 00:26:45.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.351 
=================================================================================================================== 00:26:45.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:45.351 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3204188 00:26:45.611 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3202073 00:26:45.611 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3202073 ']' 00:26:45.611 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3202073 00:26:45.611 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:45.611 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.611 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3202073 00:26:45.611 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:45.611 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:45.611 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3202073' 00:26:45.611 killing process with pid 3202073 00:26:45.611 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3202073 00:26:45.611 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3202073 00:26:45.871 00:26:45.871 real 0m16.787s 00:26:45.871 user 0m33.268s 00:26:45.871 sys 0m3.333s 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.871 ************************************ 00:26:45.871 END TEST nvmf_digest_error 00:26:45.871 ************************************ 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.871 rmmod nvme_tcp 00:26:45.871 rmmod nvme_fabrics 00:26:45.871 rmmod nvme_keyring 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3202073 ']' 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # 
killprocess 3202073 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3202073 ']' 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3202073 00:26:45.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3202073) - No such process 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3202073 is not found' 00:26:45.871 Process with pid 3202073 is not found 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.871 21:51:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.415 21:51:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:48.415 00:26:48.415 real 0m41.170s 00:26:48.415 user 1m8.086s 00:26:48.415 sys 0m10.713s 00:26:48.415 21:51:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:48.415 21:51:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.415 ************************************ 00:26:48.415 END TEST nvmf_digest 00:26:48.415 ************************************ 00:26:48.415 21:51:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:48.415 21:51:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:48.415 21:51:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:48.415 21:51:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:48.415 21:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:48.415 21:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:48.415 21:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.415 ************************************ 00:26:48.415 START TEST nvmf_bdevperf 00:26:48.416 ************************************ 00:26:48.416 21:51:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:48.416 * Looking for test storage... 
00:26:48.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:48.416 21:51:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:53.732 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:53.732 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.732 21:52:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:53.732 Found net devices under 0000:86:00.0: cvl_0_0 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.732 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:53.733 Found net devices under 0000:86:00.1: cvl_0_1 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:53.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:26:53.733 00:26:53.733 --- 10.0.0.2 ping statistics --- 00:26:53.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.733 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:53.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.432 ms 00:26:53.733 00:26:53.733 --- 10.0.0.1 ping statistics --- 00:26:53.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.733 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3208197 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3208197 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3208197 ']' 
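The nvmf/common.sh trace above boils down to a small loopback topology: the two e810 ports (cvl_0_0 and cvl_0_1) are split across network namespaces, the target port gets 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator side keeps 10.0.0.1 in the default namespace, an iptables rule opens TCP port 4420, and both directions are verified with ping before the target app is launched. A condensed standalone sketch of the same steps, using the interface and namespace names from this run (the addr-flush and error handling of the real script are omitted):

# condensed reconstruction of nvmf_tcp_init from nvmf/common.sh (names taken from this log)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP listener port
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator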
00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.733 21:52:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:53.733 [2024-07-24 21:52:01.411899] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:26:53.733 [2024-07-24 21:52:01.411944] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.733 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.733 [2024-07-24 21:52:01.470233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:53.733 [2024-07-24 21:52:01.550958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.733 [2024-07-24 21:52:01.550992] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.733 [2024-07-24 21:52:01.550999] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.733 [2024-07-24 21:52:01.551005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.733 [2024-07-24 21:52:01.551011] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
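The target itself is started inside that namespace (nvmf/common.sh@480 above) as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, and waitforlisten blocks until the app is answering on /var/tmp/spdk.sock. Core mask 0xE selects cores 1-3, which matches the three reactor threads reported next. A rough, simplified stand-in for that launch-and-wait step (the real waitforlisten polls the RPC server rather than just the socket file, and the binary path is shortened here to the build tree):

# start the target in the namespace and wait for its RPC socket (simplified stand-in for waitforlisten)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break      # socket file appears once the app has set up its RPC server
    sleep 0.1
done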
00:26:53.733 [2024-07-24 21:52:01.551079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.733 [2024-07-24 21:52:01.551341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.733 [2024-07-24 21:52:01.551344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.304 [2024-07-24 21:52:02.258800] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.304 Malloc0 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.304 [2024-07-24 21:52:02.317162] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.304 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.304 { 00:26:54.304 "params": { 00:26:54.304 "name": "Nvme$subsystem", 00:26:54.304 "trtype": "$TEST_TRANSPORT", 00:26:54.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.304 "adrfam": "ipv4", 00:26:54.304 "trsvcid": "$NVMF_PORT", 00:26:54.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.304 "hdgst": ${hdgst:-false}, 00:26:54.304 "ddgst": ${ddgst:-false} 00:26:54.304 }, 00:26:54.304 "method": "bdev_nvme_attach_controller" 00:26:54.304 } 00:26:54.304 EOF 00:26:54.305 )") 00:26:54.305 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:54.305 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:54.305 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:54.305 21:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:54.305 "params": { 00:26:54.305 "name": "Nvme1", 00:26:54.305 "trtype": "tcp", 00:26:54.305 "traddr": "10.0.0.2", 00:26:54.305 "adrfam": "ipv4", 00:26:54.305 "trsvcid": "4420", 00:26:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:54.305 "hdgst": false, 00:26:54.305 "ddgst": false 00:26:54.305 }, 00:26:54.305 "method": "bdev_nvme_attach_controller" 00:26:54.305 }' 00:26:54.305 [2024-07-24 21:52:02.357336] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:26:54.305 [2024-07-24 21:52:02.357379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208435 ] 00:26:54.305 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.305 [2024-07-24 21:52:02.410954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.565 [2024-07-24 21:52:02.486740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.825 Running I/O for 1 seconds... 
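Before bdevperf starts, the target is configured entirely over RPC (host/bdevperf.sh@17-21 above): a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. The same sequence can be issued by hand with SPDK's scripts/rpc.py against the target socket; a sketch assuming the default socket path:

# configure the NVMe-oF TCP target over RPC (mirrors the rpc_cmd calls in host/bdevperf.sh)
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then runs in the default namespace with --json /dev/fd/62, where gen_nvmf_target_json emits the bdev_nvme_attach_controller block shown above pointing at 10.0.0.2:4420 (q 128, 4096-byte I/O, verify workload, 1 second).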
00:26:55.762 00:26:55.762 Latency(us) 00:26:55.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.762 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:55.762 Verification LBA range: start 0x0 length 0x4000 00:26:55.762 Nvme1n1 : 1.01 11355.68 44.36 0.00 0.00 11220.51 1745.25 27810.06 00:26:55.762 =================================================================================================================== 00:26:55.762 Total : 11355.68 44.36 0.00 0.00 11220.51 1745.25 27810.06 00:26:56.022 21:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3208674 00:26:56.022 21:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:56.022 21:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:56.022 21:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:56.022 21:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:56.022 21:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:56.022 21:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.022 21:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.022 { 00:26:56.022 "params": { 00:26:56.022 "name": "Nvme$subsystem", 00:26:56.022 "trtype": "$TEST_TRANSPORT", 00:26:56.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.022 "adrfam": "ipv4", 00:26:56.022 "trsvcid": "$NVMF_PORT", 00:26:56.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.022 "hdgst": ${hdgst:-false}, 00:26:56.022 "ddgst": ${ddgst:-false} 00:26:56.022 }, 00:26:56.022 "method": "bdev_nvme_attach_controller" 00:26:56.022 } 00:26:56.022 EOF 00:26:56.022 )") 00:26:56.022 21:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:56.022 21:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:56.022 21:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:56.022 21:52:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:56.022 "params": { 00:26:56.022 "name": "Nvme1", 00:26:56.022 "trtype": "tcp", 00:26:56.022 "traddr": "10.0.0.2", 00:26:56.022 "adrfam": "ipv4", 00:26:56.022 "trsvcid": "4420", 00:26:56.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:56.022 "hdgst": false, 00:26:56.022 "ddgst": false 00:26:56.022 }, 00:26:56.022 "method": "bdev_nvme_attach_controller" 00:26:56.022 }' 00:26:56.022 [2024-07-24 21:52:03.965232] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:26:56.022 [2024-07-24 21:52:03.965280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208674 ] 00:26:56.022 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.022 [2024-07-24 21:52:04.020015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.022 [2024-07-24 21:52:04.089763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.282 Running I/O for 15 seconds... 
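The summary row above is internally consistent: at a 4096-byte I/O size, 11355.68 IOPS corresponds to 11355.68 * 4096 / 2^20 ≈ 44.36 MiB/s, and with queue depth 128 Little's law predicts an average latency of roughly 128 / 11355.68 s ≈ 11272 us, close to the reported 11220.51 us. A quick check of both numbers:

# sanity-check the bdevperf summary row (values copied from the log above)
awk 'BEGIN {
    iops = 11355.68; iosize = 4096; qd = 128
    printf "MiB/s      = %.2f\n", iops * iosize / (1024 * 1024)   # ~44.36
    printf "avg lat us = %.2f\n", qd / iops * 1e6                 # ~11272 (log reports 11220.51)
}'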
00:26:58.826 21:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3208197 00:26:58.826 21:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:58.826 [2024-07-24 21:52:06.934832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.826 [2024-07-24 21:52:06.934870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.826 [2024-07-24 21:52:06.934887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.826 [2024-07-24 21:52:06.934896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.826 [2024-07-24 21:52:06.934907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.826 [2024-07-24 21:52:06.934915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.826 [2024-07-24 21:52:06.934925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.826 [2024-07-24 21:52:06.934932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.826 [2024-07-24 21:52:06.934942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.826 [2024-07-24 21:52:06.934948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.826 [2024-07-24 21:52:06.934956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.826 [2024-07-24 21:52:06.934963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.826 [2024-07-24 21:52:06.934972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.826 [2024-07-24 21:52:06.934981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.826 [2024-07-24 21:52:06.934989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.826 [2024-07-24 21:52:06.934997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.826 [2024-07-24 21:52:06.935011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.826 [2024-07-24 21:52:06.935018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.826 [2024-07-24 21:52:06.935027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.826 
[2024-07-24 21:52:06.935036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.826 [2024-07-24 21:52:06.935048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.826 [2024-07-24 21:52:06.935057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.826 [2024-07-24 21:52:06.935066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.826 [2024-07-24 21:52:06.935072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.826 [2024-07-24 21:52:06.935081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.827 [2024-07-24 21:52:06.935088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.827 [2024-07-24 21:52:06.935106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.827 [2024-07-24 21:52:06.935125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935222] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.827 [2024-07-24 21:52:06.935418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.827 [2024-07-24 21:52:06.935437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.827 [2024-07-24 21:52:06.935451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.827 [2024-07-24 21:52:06.935466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.827 [2024-07-24 21:52:06.935480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.827 [2024-07-24 21:52:06.935495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.827 [2024-07-24 21:52:06.935509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935524] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.827 [2024-07-24 21:52:06.935674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.827 [2024-07-24 21:52:06.935682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:58.828 [2024-07-24 21:52:06.935973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.935991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.935997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 
21:52:06.936125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.828 [2024-07-24 21:52:06.936235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.828 [2024-07-24 21:52:06.936243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936438] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105784 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.829 [2024-07-24 21:52:06.936811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.829 [2024-07-24 21:52:06.936819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.830 [2024-07-24 21:52:06.936825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.830 [2024-07-24 21:52:06.936833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f26ee0 is same with the state(5) to be set 00:26:58.830 [2024-07-24 21:52:06.936841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:58.830 [2024-07-24 21:52:06.936847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:58.830 [2024-07-24 21:52:06.936853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105840 len:8 PRP1 0x0 PRP2 0x0 00:26:58.830 [2024-07-24 21:52:06.936860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.830 [2024-07-24 21:52:06.936903] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f26ee0 was disconnected and freed. reset controller. 
00:26:58.830 [2024-07-24 21:52:06.939899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.830 [2024-07-24 21:52:06.939954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.092 [2024-07-24 21:52:06.940845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.092 [2024-07-24 21:52:06.940863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.092 [2024-07-24 21:52:06.940870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.092 [2024-07-24 21:52:06.941055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.092 [2024-07-24 21:52:06.941233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.092 [2024-07-24 21:52:06.941242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.092 [2024-07-24 21:52:06.941249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.092 [2024-07-24 21:52:06.944090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.092 [2024-07-24 21:52:06.953293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.092 [2024-07-24 21:52:06.953976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.092 [2024-07-24 21:52:06.954020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.092 [2024-07-24 21:52:06.954061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.092 [2024-07-24 21:52:06.954643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.092 [2024-07-24 21:52:06.954976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.092 [2024-07-24 21:52:06.954985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.092 [2024-07-24 21:52:06.954991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.092 [2024-07-24 21:52:06.957690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
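The cycle recorded above repeats through the records that follow: the host disconnects the controller, tries to reopen the NVMe/TCP socket to 10.0.0.2:4420, and posix_sock_create reports connect() failing with errno = 111, which on Linux is ECONNREFUSED (nothing is accepting connections on that address/port at that moment), so controller reinitialization fails and bdev_nvme retries the reset. As a minimal illustration only (not part of SPDK or this test; 127.0.0.1:4420 is just a stand-in endpoint assumed to have no listener), a plain connect() against a closed port produces the same errno:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Plain TCP socket, same as the sockets SPDK's posix module creates. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port used in the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* illustrative address, no listener assumed */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener behind the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}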
00:26:59.092 [2024-07-24 21:52:06.966234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.092 [2024-07-24 21:52:06.966905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.092 [2024-07-24 21:52:06.966950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.092 [2024-07-24 21:52:06.966972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.092 [2024-07-24 21:52:06.967571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.092 [2024-07-24 21:52:06.967911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.092 [2024-07-24 21:52:06.967919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.092 [2024-07-24 21:52:06.967925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.092 [2024-07-24 21:52:06.970615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.092 [2024-07-24 21:52:06.979157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.092 [2024-07-24 21:52:06.979844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.092 [2024-07-24 21:52:06.979886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.092 [2024-07-24 21:52:06.979908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.092 [2024-07-24 21:52:06.980501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.092 [2024-07-24 21:52:06.980935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.092 [2024-07-24 21:52:06.980943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.092 [2024-07-24 21:52:06.980949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.092 [2024-07-24 21:52:06.983634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.093 [2024-07-24 21:52:06.992028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.093 [2024-07-24 21:52:06.992719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.093 [2024-07-24 21:52:06.992762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.093 [2024-07-24 21:52:06.992791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.093 [2024-07-24 21:52:06.993385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.093 [2024-07-24 21:52:06.993948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.093 [2024-07-24 21:52:06.993956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.093 [2024-07-24 21:52:06.993962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.093 [2024-07-24 21:52:06.996646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.093 [2024-07-24 21:52:07.005046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.093 [2024-07-24 21:52:07.005747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.093 [2024-07-24 21:52:07.005789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.093 [2024-07-24 21:52:07.005811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.093 [2024-07-24 21:52:07.006405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.093 [2024-07-24 21:52:07.006988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.093 [2024-07-24 21:52:07.007012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.093 [2024-07-24 21:52:07.007032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.093 [2024-07-24 21:52:07.009841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.093 [2024-07-24 21:52:07.017863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.093 [2024-07-24 21:52:07.018558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.093 [2024-07-24 21:52:07.018602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.093 [2024-07-24 21:52:07.018624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.093 [2024-07-24 21:52:07.019002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.093 [2024-07-24 21:52:07.019181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.093 [2024-07-24 21:52:07.019190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.093 [2024-07-24 21:52:07.019196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.093 [2024-07-24 21:52:07.021878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.093 [2024-07-24 21:52:07.030773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.093 [2024-07-24 21:52:07.031466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.093 [2024-07-24 21:52:07.031509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.093 [2024-07-24 21:52:07.031531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.093 [2024-07-24 21:52:07.031918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.093 [2024-07-24 21:52:07.032178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.093 [2024-07-24 21:52:07.032194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.093 [2024-07-24 21:52:07.032203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.093 [2024-07-24 21:52:07.036263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.093 [2024-07-24 21:52:07.044408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.093 [2024-07-24 21:52:07.045071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.093 [2024-07-24 21:52:07.045114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.093 [2024-07-24 21:52:07.045136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.093 [2024-07-24 21:52:07.045716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.093 [2024-07-24 21:52:07.046234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.093 [2024-07-24 21:52:07.046242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.093 [2024-07-24 21:52:07.046249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.093 [2024-07-24 21:52:07.048969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.093 [2024-07-24 21:52:07.057290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.093 [2024-07-24 21:52:07.057979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.093 [2024-07-24 21:52:07.058020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.093 [2024-07-24 21:52:07.058057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.093 [2024-07-24 21:52:07.058308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.093 [2024-07-24 21:52:07.058481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.093 [2024-07-24 21:52:07.058489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.093 [2024-07-24 21:52:07.058495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.093 [2024-07-24 21:52:07.061187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.093 [2024-07-24 21:52:07.070228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.093 [2024-07-24 21:52:07.070865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.093 [2024-07-24 21:52:07.070907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.093 [2024-07-24 21:52:07.070929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.093 [2024-07-24 21:52:07.071288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.093 [2024-07-24 21:52:07.071462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.093 [2024-07-24 21:52:07.071469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.093 [2024-07-24 21:52:07.071475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.093 [2024-07-24 21:52:07.074155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.093 [2024-07-24 21:52:07.083043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.093 [2024-07-24 21:52:07.083701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.093 [2024-07-24 21:52:07.083742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.093 [2024-07-24 21:52:07.083764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.093 [2024-07-24 21:52:07.084160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.093 [2024-07-24 21:52:07.084333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.093 [2024-07-24 21:52:07.084341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.093 [2024-07-24 21:52:07.084347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.093 [2024-07-24 21:52:07.087032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.093 [2024-07-24 21:52:07.095860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.093 [2024-07-24 21:52:07.096479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.093 [2024-07-24 21:52:07.096520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.093 [2024-07-24 21:52:07.096541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.093 [2024-07-24 21:52:07.097004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.093 [2024-07-24 21:52:07.097182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.093 [2024-07-24 21:52:07.097191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.093 [2024-07-24 21:52:07.097197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.093 [2024-07-24 21:52:07.099876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.093 [2024-07-24 21:52:07.108866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.093 [2024-07-24 21:52:07.109529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.093 [2024-07-24 21:52:07.109571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.093 [2024-07-24 21:52:07.109592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.093 [2024-07-24 21:52:07.109982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.093 [2024-07-24 21:52:07.110159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.093 [2024-07-24 21:52:07.110167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.093 [2024-07-24 21:52:07.110174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.093 [2024-07-24 21:52:07.112857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.093 [2024-07-24 21:52:07.121759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.094 [2024-07-24 21:52:07.122440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.094 [2024-07-24 21:52:07.122483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.094 [2024-07-24 21:52:07.122505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.094 [2024-07-24 21:52:07.123054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.094 [2024-07-24 21:52:07.123227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.094 [2024-07-24 21:52:07.123235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.094 [2024-07-24 21:52:07.123241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.094 [2024-07-24 21:52:07.125916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.094 [2024-07-24 21:52:07.134661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.094 [2024-07-24 21:52:07.135346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.094 [2024-07-24 21:52:07.135388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.094 [2024-07-24 21:52:07.135410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.094 [2024-07-24 21:52:07.135989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.094 [2024-07-24 21:52:07.136319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.094 [2024-07-24 21:52:07.136328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.094 [2024-07-24 21:52:07.136334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.094 [2024-07-24 21:52:07.139013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.094 [2024-07-24 21:52:07.147524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.094 [2024-07-24 21:52:07.148130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.094 [2024-07-24 21:52:07.148146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.094 [2024-07-24 21:52:07.148153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.094 [2024-07-24 21:52:07.148325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.094 [2024-07-24 21:52:07.148497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.094 [2024-07-24 21:52:07.148505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.094 [2024-07-24 21:52:07.148511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.094 [2024-07-24 21:52:07.151220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.094 [2024-07-24 21:52:07.160393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.094 [2024-07-24 21:52:07.161074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.094 [2024-07-24 21:52:07.161115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.094 [2024-07-24 21:52:07.161136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.094 [2024-07-24 21:52:07.161453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.094 [2024-07-24 21:52:07.161615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.094 [2024-07-24 21:52:07.161623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.094 [2024-07-24 21:52:07.161631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.094 [2024-07-24 21:52:07.164327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.094 [2024-07-24 21:52:07.173267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.094 [2024-07-24 21:52:07.173952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.094 [2024-07-24 21:52:07.173993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.094 [2024-07-24 21:52:07.174015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.094 [2024-07-24 21:52:07.174422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.094 [2024-07-24 21:52:07.174595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.094 [2024-07-24 21:52:07.174602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.094 [2024-07-24 21:52:07.174608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.094 [2024-07-24 21:52:07.177294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.094 [2024-07-24 21:52:07.186173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.094 [2024-07-24 21:52:07.186874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.094 [2024-07-24 21:52:07.186915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.094 [2024-07-24 21:52:07.186936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.094 [2024-07-24 21:52:07.187349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.094 [2024-07-24 21:52:07.187527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.094 [2024-07-24 21:52:07.187535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.094 [2024-07-24 21:52:07.187542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.094 [2024-07-24 21:52:07.190397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.094 [2024-07-24 21:52:07.199299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.094 [2024-07-24 21:52:07.199939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.094 [2024-07-24 21:52:07.199981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.094 [2024-07-24 21:52:07.200003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.094 [2024-07-24 21:52:07.200525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.094 [2024-07-24 21:52:07.200704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.094 [2024-07-24 21:52:07.200712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.094 [2024-07-24 21:52:07.200718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.094 [2024-07-24 21:52:07.203555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.355 [2024-07-24 21:52:07.212416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.355 [2024-07-24 21:52:07.213083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.355 [2024-07-24 21:52:07.213124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.355 [2024-07-24 21:52:07.213145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.355 [2024-07-24 21:52:07.213539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.355 [2024-07-24 21:52:07.213717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.355 [2024-07-24 21:52:07.213725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.355 [2024-07-24 21:52:07.213732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.355 [2024-07-24 21:52:07.216558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.355 [2024-07-24 21:52:07.225335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.355 [2024-07-24 21:52:07.225998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.355 [2024-07-24 21:52:07.226040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.355 [2024-07-24 21:52:07.226078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.355 [2024-07-24 21:52:07.226478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.355 [2024-07-24 21:52:07.226650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.355 [2024-07-24 21:52:07.226658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.355 [2024-07-24 21:52:07.226664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.355 [2024-07-24 21:52:07.229347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.355 [2024-07-24 21:52:07.238235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.355 [2024-07-24 21:52:07.238924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.355 [2024-07-24 21:52:07.238941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.355 [2024-07-24 21:52:07.238948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.355 [2024-07-24 21:52:07.239126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.355 [2024-07-24 21:52:07.239299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.355 [2024-07-24 21:52:07.239307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.355 [2024-07-24 21:52:07.239312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.355 [2024-07-24 21:52:07.241994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.355 [2024-07-24 21:52:07.251139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.355 [2024-07-24 21:52:07.251805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.355 [2024-07-24 21:52:07.251847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.355 [2024-07-24 21:52:07.251869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.355 [2024-07-24 21:52:07.252407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.355 [2024-07-24 21:52:07.252580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.355 [2024-07-24 21:52:07.252588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.355 [2024-07-24 21:52:07.252594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.355 [2024-07-24 21:52:07.255241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.355 [2024-07-24 21:52:07.264045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.355 [2024-07-24 21:52:07.264702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.355 [2024-07-24 21:52:07.264743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.355 [2024-07-24 21:52:07.264764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.355 [2024-07-24 21:52:07.265300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.355 [2024-07-24 21:52:07.265472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.355 [2024-07-24 21:52:07.265480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.355 [2024-07-24 21:52:07.265486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.355 [2024-07-24 21:52:07.269552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.355 [2024-07-24 21:52:07.277485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.355 [2024-07-24 21:52:07.277950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.355 [2024-07-24 21:52:07.278001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.355 [2024-07-24 21:52:07.278022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.355 [2024-07-24 21:52:07.278544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.355 [2024-07-24 21:52:07.278716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.355 [2024-07-24 21:52:07.278724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.355 [2024-07-24 21:52:07.278731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.355 [2024-07-24 21:52:07.281459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.355 [2024-07-24 21:52:07.290404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.355 [2024-07-24 21:52:07.291117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.355 [2024-07-24 21:52:07.291160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.355 [2024-07-24 21:52:07.291182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.355 [2024-07-24 21:52:07.291706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.355 [2024-07-24 21:52:07.291878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.355 [2024-07-24 21:52:07.291886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.355 [2024-07-24 21:52:07.291895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.355 [2024-07-24 21:52:07.294584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.355 [2024-07-24 21:52:07.303378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.355 [2024-07-24 21:52:07.304061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.355 [2024-07-24 21:52:07.304104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.355 [2024-07-24 21:52:07.304125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.355 [2024-07-24 21:52:07.304557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.355 [2024-07-24 21:52:07.304729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.356 [2024-07-24 21:52:07.304737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.356 [2024-07-24 21:52:07.304742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.356 [2024-07-24 21:52:07.307515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.356 [2024-07-24 21:52:07.316309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.356 [2024-07-24 21:52:07.316987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.356 [2024-07-24 21:52:07.317029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.356 [2024-07-24 21:52:07.317065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.356 [2024-07-24 21:52:07.317568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.356 [2024-07-24 21:52:07.317741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.356 [2024-07-24 21:52:07.317749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.356 [2024-07-24 21:52:07.317755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.356 [2024-07-24 21:52:07.320436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.356 [2024-07-24 21:52:07.329141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.356 [2024-07-24 21:52:07.329797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.356 [2024-07-24 21:52:07.329840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.356 [2024-07-24 21:52:07.329861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.356 [2024-07-24 21:52:07.330454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.356 [2024-07-24 21:52:07.330898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.356 [2024-07-24 21:52:07.330905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.356 [2024-07-24 21:52:07.330912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.356 [2024-07-24 21:52:07.333592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.356 [2024-07-24 21:52:07.341979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.356 [2024-07-24 21:52:07.342667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.356 [2024-07-24 21:52:07.342717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.356 [2024-07-24 21:52:07.342739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.356 [2024-07-24 21:52:07.343331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.356 [2024-07-24 21:52:07.343809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.356 [2024-07-24 21:52:07.343817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.356 [2024-07-24 21:52:07.343824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.356 [2024-07-24 21:52:07.346512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.356 [2024-07-24 21:52:07.354999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.356 [2024-07-24 21:52:07.355689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.356 [2024-07-24 21:52:07.355732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.356 [2024-07-24 21:52:07.355753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.356 [2024-07-24 21:52:07.356342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.356 [2024-07-24 21:52:07.356875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.356 [2024-07-24 21:52:07.356883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.356 [2024-07-24 21:52:07.356889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.356 [2024-07-24 21:52:07.359575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.356 [2024-07-24 21:52:07.367845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.356 [2024-07-24 21:52:07.368453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.356 [2024-07-24 21:52:07.368470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.356 [2024-07-24 21:52:07.368476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.356 [2024-07-24 21:52:07.368649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.356 [2024-07-24 21:52:07.368821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.356 [2024-07-24 21:52:07.368829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.356 [2024-07-24 21:52:07.368835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.356 [2024-07-24 21:52:07.371527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.356 [2024-07-24 21:52:07.380748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.356 [2024-07-24 21:52:07.381394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.356 [2024-07-24 21:52:07.381410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.356 [2024-07-24 21:52:07.381416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.356 [2024-07-24 21:52:07.381588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.356 [2024-07-24 21:52:07.381765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.356 [2024-07-24 21:52:07.381773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.356 [2024-07-24 21:52:07.381779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.356 [2024-07-24 21:52:07.384472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.356 [2024-07-24 21:52:07.393664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.356 [2024-07-24 21:52:07.394329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.356 [2024-07-24 21:52:07.394370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.356 [2024-07-24 21:52:07.394391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.356 [2024-07-24 21:52:07.394644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.356 [2024-07-24 21:52:07.394807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.356 [2024-07-24 21:52:07.394815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.356 [2024-07-24 21:52:07.394821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.356 [2024-07-24 21:52:07.397518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.356 [2024-07-24 21:52:07.406572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.356 [2024-07-24 21:52:07.407214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.356 [2024-07-24 21:52:07.407257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.356 [2024-07-24 21:52:07.407279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.356 [2024-07-24 21:52:07.407858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.356 [2024-07-24 21:52:07.408197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.356 [2024-07-24 21:52:07.408205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.356 [2024-07-24 21:52:07.408211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.356 [2024-07-24 21:52:07.410892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.356 [2024-07-24 21:52:07.419467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.356 [2024-07-24 21:52:07.420146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.356 [2024-07-24 21:52:07.420189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.356 [2024-07-24 21:52:07.420211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.356 [2024-07-24 21:52:07.420790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.356 [2024-07-24 21:52:07.421156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.356 [2024-07-24 21:52:07.421164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.356 [2024-07-24 21:52:07.421170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.356 [2024-07-24 21:52:07.423855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.356 [2024-07-24 21:52:07.432362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.356 [2024-07-24 21:52:07.433057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.357 [2024-07-24 21:52:07.433102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.357 [2024-07-24 21:52:07.433123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.357 [2024-07-24 21:52:07.433703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.357 [2024-07-24 21:52:07.434296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.357 [2024-07-24 21:52:07.434329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.357 [2024-07-24 21:52:07.434335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.357 [2024-07-24 21:52:07.437019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.357 [2024-07-24 21:52:07.445458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.357 [2024-07-24 21:52:07.446147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.357 [2024-07-24 21:52:07.446162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.357 [2024-07-24 21:52:07.446169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.357 [2024-07-24 21:52:07.446340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.357 [2024-07-24 21:52:07.446513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.357 [2024-07-24 21:52:07.446521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.357 [2024-07-24 21:52:07.446527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.357 [2024-07-24 21:52:07.449301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.357 [2024-07-24 21:52:07.458377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.357 [2024-07-24 21:52:07.459063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.357 [2024-07-24 21:52:07.459106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.357 [2024-07-24 21:52:07.459128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.357 [2024-07-24 21:52:07.459716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.357 [2024-07-24 21:52:07.459879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.357 [2024-07-24 21:52:07.459886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.357 [2024-07-24 21:52:07.459892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.357 [2024-07-24 21:52:07.462627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.618 [2024-07-24 21:52:07.471556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.618 [2024-07-24 21:52:07.472229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.618 [2024-07-24 21:52:07.472275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.618 [2024-07-24 21:52:07.472305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.618 [2024-07-24 21:52:07.472885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.618 [2024-07-24 21:52:07.473191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.618 [2024-07-24 21:52:07.473199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.618 [2024-07-24 21:52:07.473205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.618 [2024-07-24 21:52:07.476030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.618 [2024-07-24 21:52:07.484376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.618 [2024-07-24 21:52:07.485067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.618 [2024-07-24 21:52:07.485083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.618 [2024-07-24 21:52:07.485090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.618 [2024-07-24 21:52:07.485272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.618 [2024-07-24 21:52:07.485434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.618 [2024-07-24 21:52:07.485442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.618 [2024-07-24 21:52:07.485447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.618 [2024-07-24 21:52:07.488141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.618 [2024-07-24 21:52:07.497402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.618 [2024-07-24 21:52:07.498017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.618 [2024-07-24 21:52:07.498033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.618 [2024-07-24 21:52:07.498040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.618 [2024-07-24 21:52:07.498217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.618 [2024-07-24 21:52:07.498389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.618 [2024-07-24 21:52:07.498398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.618 [2024-07-24 21:52:07.498404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.618 [2024-07-24 21:52:07.501125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.618 [2024-07-24 21:52:07.510398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.618 [2024-07-24 21:52:07.510918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.618 [2024-07-24 21:52:07.510934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.618 [2024-07-24 21:52:07.510941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.618 [2024-07-24 21:52:07.511119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.618 [2024-07-24 21:52:07.511291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.618 [2024-07-24 21:52:07.511302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.618 [2024-07-24 21:52:07.511308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.618 [2024-07-24 21:52:07.513997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.618 [2024-07-24 21:52:07.523334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.618 [2024-07-24 21:52:07.524018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.618 [2024-07-24 21:52:07.524034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.618 [2024-07-24 21:52:07.524041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.618 [2024-07-24 21:52:07.524219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.618 [2024-07-24 21:52:07.524391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.618 [2024-07-24 21:52:07.524399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.618 [2024-07-24 21:52:07.524405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.619 [2024-07-24 21:52:07.527098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.619 [2024-07-24 21:52:07.536295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.619 [2024-07-24 21:52:07.536923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-07-24 21:52:07.536938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.619 [2024-07-24 21:52:07.536945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.619 [2024-07-24 21:52:07.537125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.619 [2024-07-24 21:52:07.537297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.619 [2024-07-24 21:52:07.537305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.619 [2024-07-24 21:52:07.537311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.619 [2024-07-24 21:52:07.540000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.619 [2024-07-24 21:52:07.549216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.619 [2024-07-24 21:52:07.549875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-07-24 21:52:07.549917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.619 [2024-07-24 21:52:07.549938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.619 [2024-07-24 21:52:07.550331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.619 [2024-07-24 21:52:07.550504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.619 [2024-07-24 21:52:07.550529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.619 [2024-07-24 21:52:07.550535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.619 [2024-07-24 21:52:07.553378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.619 [2024-07-24 21:52:07.562313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.619 [2024-07-24 21:52:07.563012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-07-24 21:52:07.563066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.619 [2024-07-24 21:52:07.563088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.619 [2024-07-24 21:52:07.563381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.619 [2024-07-24 21:52:07.563553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.619 [2024-07-24 21:52:07.563561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.619 [2024-07-24 21:52:07.563567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.619 [2024-07-24 21:52:07.566278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.619 [2024-07-24 21:52:07.575290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.619 [2024-07-24 21:52:07.576004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-07-24 21:52:07.576058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.619 [2024-07-24 21:52:07.576081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.619 [2024-07-24 21:52:07.576489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.619 [2024-07-24 21:52:07.576662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.619 [2024-07-24 21:52:07.576670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.619 [2024-07-24 21:52:07.576676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.619 [2024-07-24 21:52:07.579367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.619 [2024-07-24 21:52:07.588282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.619 [2024-07-24 21:52:07.588894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-07-24 21:52:07.588935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.619 [2024-07-24 21:52:07.588956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.619 [2024-07-24 21:52:07.589371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.619 [2024-07-24 21:52:07.589549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.619 [2024-07-24 21:52:07.589557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.619 [2024-07-24 21:52:07.589563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.619 [2024-07-24 21:52:07.592274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.619 [2024-07-24 21:52:07.601164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.619 [2024-07-24 21:52:07.601849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-07-24 21:52:07.601890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.619 [2024-07-24 21:52:07.601911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.619 [2024-07-24 21:52:07.602267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.619 [2024-07-24 21:52:07.602441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.619 [2024-07-24 21:52:07.602449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.619 [2024-07-24 21:52:07.602455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.619 [2024-07-24 21:52:07.605206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.619 [2024-07-24 21:52:07.614130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.619 [2024-07-24 21:52:07.614741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-07-24 21:52:07.614757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.619 [2024-07-24 21:52:07.614764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.619 [2024-07-24 21:52:07.614936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.619 [2024-07-24 21:52:07.615116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.619 [2024-07-24 21:52:07.615125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.619 [2024-07-24 21:52:07.615131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.619 [2024-07-24 21:52:07.617890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.619 [2024-07-24 21:52:07.627049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.619 [2024-07-24 21:52:07.627669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-07-24 21:52:07.627711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.619 [2024-07-24 21:52:07.627732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.619 [2024-07-24 21:52:07.628240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.619 [2024-07-24 21:52:07.628413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.619 [2024-07-24 21:52:07.628421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.619 [2024-07-24 21:52:07.628427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.619 [2024-07-24 21:52:07.632294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.619 [2024-07-24 21:52:07.640732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.619 [2024-07-24 21:52:07.641421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-07-24 21:52:07.641438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.619 [2024-07-24 21:52:07.641445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.619 [2024-07-24 21:52:07.641617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.619 [2024-07-24 21:52:07.641790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.619 [2024-07-24 21:52:07.641798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.619 [2024-07-24 21:52:07.641807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.619 [2024-07-24 21:52:07.644575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.619 [2024-07-24 21:52:07.653703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.619 [2024-07-24 21:52:07.654428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.619 [2024-07-24 21:52:07.654471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.619 [2024-07-24 21:52:07.654492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.619 [2024-07-24 21:52:07.654728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.619 [2024-07-24 21:52:07.654891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.619 [2024-07-24 21:52:07.654898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.620 [2024-07-24 21:52:07.654904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.620 [2024-07-24 21:52:07.657652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.620 [2024-07-24 21:52:07.666796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.620 [2024-07-24 21:52:07.667457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-07-24 21:52:07.667473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.620 [2024-07-24 21:52:07.667480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.620 [2024-07-24 21:52:07.667641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.620 [2024-07-24 21:52:07.667804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.620 [2024-07-24 21:52:07.667812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.620 [2024-07-24 21:52:07.667818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.620 [2024-07-24 21:52:07.670559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.620 [2024-07-24 21:52:07.679713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.620 [2024-07-24 21:52:07.680420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-07-24 21:52:07.680462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.620 [2024-07-24 21:52:07.680484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.620 [2024-07-24 21:52:07.681075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.620 [2024-07-24 21:52:07.681663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.620 [2024-07-24 21:52:07.681672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.620 [2024-07-24 21:52:07.681678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.620 [2024-07-24 21:52:07.684453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.620 [2024-07-24 21:52:07.692737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.620 [2024-07-24 21:52:07.693369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-07-24 21:52:07.693384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.620 [2024-07-24 21:52:07.693390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.620 [2024-07-24 21:52:07.693552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.620 [2024-07-24 21:52:07.693715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.620 [2024-07-24 21:52:07.693723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.620 [2024-07-24 21:52:07.693728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.620 [2024-07-24 21:52:07.696538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.620 [2024-07-24 21:52:07.705879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.620 [2024-07-24 21:52:07.706484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-07-24 21:52:07.706526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.620 [2024-07-24 21:52:07.706547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.620 [2024-07-24 21:52:07.707141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.620 [2024-07-24 21:52:07.707648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.620 [2024-07-24 21:52:07.707656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.620 [2024-07-24 21:52:07.707661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.620 [2024-07-24 21:52:07.710379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.620 [2024-07-24 21:52:07.718746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.620 [2024-07-24 21:52:07.719355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-07-24 21:52:07.719397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.620 [2024-07-24 21:52:07.719419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.620 [2024-07-24 21:52:07.719740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.620 [2024-07-24 21:52:07.719917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.620 [2024-07-24 21:52:07.719925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.620 [2024-07-24 21:52:07.719932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.620 [2024-07-24 21:52:07.722645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.620 [2024-07-24 21:52:07.731742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.620 [2024-07-24 21:52:07.732431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.620 [2024-07-24 21:52:07.732448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.620 [2024-07-24 21:52:07.732455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.620 [2024-07-24 21:52:07.732627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.620 [2024-07-24 21:52:07.732803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.620 [2024-07-24 21:52:07.732811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.620 [2024-07-24 21:52:07.732818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.882 [2024-07-24 21:52:07.735633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.882 [2024-07-24 21:52:07.744651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.882 [2024-07-24 21:52:07.745351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.882 [2024-07-24 21:52:07.745395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.882 [2024-07-24 21:52:07.745416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.882 [2024-07-24 21:52:07.745764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.882 [2024-07-24 21:52:07.745937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.882 [2024-07-24 21:52:07.745945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.882 [2024-07-24 21:52:07.745951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.882 [2024-07-24 21:52:07.748635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.882 [2024-07-24 21:52:07.757491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.882 [2024-07-24 21:52:07.758196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.882 [2024-07-24 21:52:07.758241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.882 [2024-07-24 21:52:07.758263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.882 [2024-07-24 21:52:07.758645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.882 [2024-07-24 21:52:07.758817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.882 [2024-07-24 21:52:07.758825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.882 [2024-07-24 21:52:07.758831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.882 [2024-07-24 21:52:07.761519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.882 [2024-07-24 21:52:07.770487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.882 [2024-07-24 21:52:07.771489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.882 [2024-07-24 21:52:07.771511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.882 [2024-07-24 21:52:07.771519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.882 [2024-07-24 21:52:07.771699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.882 [2024-07-24 21:52:07.771872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.882 [2024-07-24 21:52:07.771881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.882 [2024-07-24 21:52:07.771887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.882 [2024-07-24 21:52:07.774717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.882 [2024-07-24 21:52:07.783589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.882 [2024-07-24 21:52:07.784245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.882 [2024-07-24 21:52:07.784262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.882 [2024-07-24 21:52:07.784269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.882 [2024-07-24 21:52:07.784447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.882 [2024-07-24 21:52:07.784624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.882 [2024-07-24 21:52:07.784632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.882 [2024-07-24 21:52:07.784639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.882 [2024-07-24 21:52:07.787475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.882 [2024-07-24 21:52:07.796705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.882 [2024-07-24 21:52:07.797436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.882 [2024-07-24 21:52:07.797481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.882 [2024-07-24 21:52:07.797503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.882 [2024-07-24 21:52:07.797922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.882 [2024-07-24 21:52:07.798106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.882 [2024-07-24 21:52:07.798115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.882 [2024-07-24 21:52:07.798121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.882 [2024-07-24 21:52:07.800954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.882 [2024-07-24 21:52:07.809835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.882 [2024-07-24 21:52:07.810432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.882 [2024-07-24 21:52:07.810449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.882 [2024-07-24 21:52:07.810456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.882 [2024-07-24 21:52:07.810633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.882 [2024-07-24 21:52:07.810810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.882 [2024-07-24 21:52:07.810819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.882 [2024-07-24 21:52:07.810825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.882 [2024-07-24 21:52:07.813666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.882 [2024-07-24 21:52:07.822840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.882 [2024-07-24 21:52:07.823447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.882 [2024-07-24 21:52:07.823497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.882 [2024-07-24 21:52:07.823520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.882 [2024-07-24 21:52:07.823963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.882 [2024-07-24 21:52:07.824140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.882 [2024-07-24 21:52:07.824149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.882 [2024-07-24 21:52:07.824155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.882 [2024-07-24 21:52:07.826901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.883 [2024-07-24 21:52:07.835779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.883 [2024-07-24 21:52:07.836401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.883 [2024-07-24 21:52:07.836444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.883 [2024-07-24 21:52:07.836465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.883 [2024-07-24 21:52:07.836977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.883 [2024-07-24 21:52:07.837153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.883 [2024-07-24 21:52:07.837161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.883 [2024-07-24 21:52:07.837167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.883 [2024-07-24 21:52:07.839896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.883 [2024-07-24 21:52:07.848709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.883 [2024-07-24 21:52:07.849519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.883 [2024-07-24 21:52:07.849564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.883 [2024-07-24 21:52:07.849585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.883 [2024-07-24 21:52:07.850094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.883 [2024-07-24 21:52:07.850267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.883 [2024-07-24 21:52:07.850275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.883 [2024-07-24 21:52:07.850281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.883 [2024-07-24 21:52:07.852964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.883 [2024-07-24 21:52:07.861640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.883 [2024-07-24 21:52:07.862238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.883 [2024-07-24 21:52:07.862282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.883 [2024-07-24 21:52:07.862303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.883 [2024-07-24 21:52:07.862874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.883 [2024-07-24 21:52:07.863054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.883 [2024-07-24 21:52:07.863062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.883 [2024-07-24 21:52:07.863069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.883 [2024-07-24 21:52:07.865793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.883 [2024-07-24 21:52:07.874630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.883 [2024-07-24 21:52:07.875238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.883 [2024-07-24 21:52:07.875255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.883 [2024-07-24 21:52:07.875261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.883 [2024-07-24 21:52:07.875438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.883 [2024-07-24 21:52:07.875602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.883 [2024-07-24 21:52:07.875609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.883 [2024-07-24 21:52:07.875615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.883 [2024-07-24 21:52:07.878319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.883 [2024-07-24 21:52:07.887550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.883 [2024-07-24 21:52:07.888149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.883 [2024-07-24 21:52:07.888192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.883 [2024-07-24 21:52:07.888213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.883 [2024-07-24 21:52:07.888793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.883 [2024-07-24 21:52:07.889282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.883 [2024-07-24 21:52:07.889290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.883 [2024-07-24 21:52:07.889296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.883 [2024-07-24 21:52:07.892069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.883 [2024-07-24 21:52:07.900502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.883 [2024-07-24 21:52:07.901116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.883 [2024-07-24 21:52:07.901147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.883 [2024-07-24 21:52:07.901170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.883 [2024-07-24 21:52:07.901749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.883 [2024-07-24 21:52:07.901948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.883 [2024-07-24 21:52:07.901956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.883 [2024-07-24 21:52:07.901962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.883 [2024-07-24 21:52:07.904699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.883 [2024-07-24 21:52:07.913511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.883 [2024-07-24 21:52:07.914071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.883 [2024-07-24 21:52:07.914088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.883 [2024-07-24 21:52:07.914095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.883 [2024-07-24 21:52:07.914267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.883 [2024-07-24 21:52:07.914439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.883 [2024-07-24 21:52:07.914447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.883 [2024-07-24 21:52:07.914453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.883 [2024-07-24 21:52:07.917222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.883 [2024-07-24 21:52:07.926547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.883 [2024-07-24 21:52:07.927194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.883 [2024-07-24 21:52:07.927237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.883 [2024-07-24 21:52:07.927258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.883 [2024-07-24 21:52:07.927837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.883 [2024-07-24 21:52:07.928117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.883 [2024-07-24 21:52:07.928125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.883 [2024-07-24 21:52:07.928132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.883 [2024-07-24 21:52:07.930812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.883 [2024-07-24 21:52:07.939500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.883 [2024-07-24 21:52:07.940199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.883 [2024-07-24 21:52:07.940241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.883 [2024-07-24 21:52:07.940262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.883 [2024-07-24 21:52:07.940839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.883 [2024-07-24 21:52:07.941132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.883 [2024-07-24 21:52:07.941140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.883 [2024-07-24 21:52:07.941146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.883 [2024-07-24 21:52:07.943895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.883 [2024-07-24 21:52:07.952653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.883 [2024-07-24 21:52:07.953263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.883 [2024-07-24 21:52:07.953305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.883 [2024-07-24 21:52:07.953334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.883 [2024-07-24 21:52:07.953915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.883 [2024-07-24 21:52:07.954400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.883 [2024-07-24 21:52:07.954409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.883 [2024-07-24 21:52:07.954415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.883 [2024-07-24 21:52:07.957146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.884 [2024-07-24 21:52:07.965449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.884 [2024-07-24 21:52:07.966062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.884 [2024-07-24 21:52:07.966105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.884 [2024-07-24 21:52:07.966127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.884 [2024-07-24 21:52:07.966705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.884 [2024-07-24 21:52:07.967308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.884 [2024-07-24 21:52:07.967317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.884 [2024-07-24 21:52:07.967323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.884 [2024-07-24 21:52:07.970048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.884 [2024-07-24 21:52:07.978321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.884 [2024-07-24 21:52:07.978980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.884 [2024-07-24 21:52:07.978995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.884 [2024-07-24 21:52:07.979001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.884 [2024-07-24 21:52:07.979191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.884 [2024-07-24 21:52:07.979362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.884 [2024-07-24 21:52:07.979370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.884 [2024-07-24 21:52:07.979376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.884 [2024-07-24 21:52:07.982158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.884 [2024-07-24 21:52:07.991229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.884 [2024-07-24 21:52:07.991876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.884 [2024-07-24 21:52:07.991919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:26:59.884 [2024-07-24 21:52:07.991941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:26:59.884 [2024-07-24 21:52:07.992312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:26:59.884 [2024-07-24 21:52:07.992566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.884 [2024-07-24 21:52:07.992581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.884 [2024-07-24 21:52:07.992590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.884 [2024-07-24 21:52:07.996652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.146 [2024-07-24 21:52:08.004744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.146 [2024-07-24 21:52:08.005433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.146 [2024-07-24 21:52:08.005477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.146 [2024-07-24 21:52:08.005498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.146 [2024-07-24 21:52:08.006092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.146 [2024-07-24 21:52:08.006298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.146 [2024-07-24 21:52:08.006306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.146 [2024-07-24 21:52:08.006312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.146 [2024-07-24 21:52:08.009087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.146 [2024-07-24 21:52:08.017554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.146 [2024-07-24 21:52:08.018235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.146 [2024-07-24 21:52:08.018278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.146 [2024-07-24 21:52:08.018299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.146 [2024-07-24 21:52:08.018570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.146 [2024-07-24 21:52:08.018733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.146 [2024-07-24 21:52:08.018740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.146 [2024-07-24 21:52:08.018746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.146 [2024-07-24 21:52:08.021437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.146 [2024-07-24 21:52:08.030363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.146 [2024-07-24 21:52:08.031008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.146 [2024-07-24 21:52:08.031022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.146 [2024-07-24 21:52:08.031028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.146 [2024-07-24 21:52:08.031218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.146 [2024-07-24 21:52:08.031391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.146 [2024-07-24 21:52:08.031399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.146 [2024-07-24 21:52:08.031405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.146 [2024-07-24 21:52:08.034090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.146 [2024-07-24 21:52:08.043232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.146 [2024-07-24 21:52:08.043851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.146 [2024-07-24 21:52:08.043891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.146 [2024-07-24 21:52:08.043912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.146 [2024-07-24 21:52:08.044507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.146 [2024-07-24 21:52:08.044956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.146 [2024-07-24 21:52:08.044964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.146 [2024-07-24 21:52:08.044970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.146 [2024-07-24 21:52:08.047652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.146 [2024-07-24 21:52:08.056071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.146 [2024-07-24 21:52:08.056763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.146 [2024-07-24 21:52:08.056806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.146 [2024-07-24 21:52:08.056828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.146 [2024-07-24 21:52:08.057339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.146 [2024-07-24 21:52:08.057512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.146 [2024-07-24 21:52:08.057520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.146 [2024-07-24 21:52:08.057525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.146 [2024-07-24 21:52:08.060261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.146 [2024-07-24 21:52:08.068919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.146 [2024-07-24 21:52:08.069611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.146 [2024-07-24 21:52:08.069654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.146 [2024-07-24 21:52:08.069676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.146 [2024-07-24 21:52:08.070247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.146 [2024-07-24 21:52:08.070419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.146 [2024-07-24 21:52:08.070427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.146 [2024-07-24 21:52:08.070433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.146 [2024-07-24 21:52:08.073117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.146 [2024-07-24 21:52:08.081934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.146 [2024-07-24 21:52:08.082624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.146 [2024-07-24 21:52:08.082667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.146 [2024-07-24 21:52:08.082687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.146 [2024-07-24 21:52:08.083288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.146 [2024-07-24 21:52:08.083630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.146 [2024-07-24 21:52:08.083639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.146 [2024-07-24 21:52:08.083645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.146 [2024-07-24 21:52:08.087665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.146 [2024-07-24 21:52:08.095416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.146 [2024-07-24 21:52:08.096124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.146 [2024-07-24 21:52:08.096167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.146 [2024-07-24 21:52:08.096188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.146 [2024-07-24 21:52:08.096568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.146 [2024-07-24 21:52:08.096741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.147 [2024-07-24 21:52:08.096748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.147 [2024-07-24 21:52:08.096755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.147 [2024-07-24 21:52:08.099485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.147 [2024-07-24 21:52:08.108293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.147 [2024-07-24 21:52:08.109010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.147 [2024-07-24 21:52:08.109063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.147 [2024-07-24 21:52:08.109085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.147 [2024-07-24 21:52:08.109421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.147 [2024-07-24 21:52:08.109592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.147 [2024-07-24 21:52:08.109600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.147 [2024-07-24 21:52:08.109606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.147 [2024-07-24 21:52:08.112310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.147 [2024-07-24 21:52:08.121097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.147 [2024-07-24 21:52:08.121714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.147 [2024-07-24 21:52:08.121730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.147 [2024-07-24 21:52:08.121738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.147 [2024-07-24 21:52:08.121914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.147 [2024-07-24 21:52:08.122093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.147 [2024-07-24 21:52:08.122102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.147 [2024-07-24 21:52:08.122111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.147 [2024-07-24 21:52:08.124791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.147 [2024-07-24 21:52:08.134018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.147 [2024-07-24 21:52:08.134673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.147 [2024-07-24 21:52:08.134715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.147 [2024-07-24 21:52:08.134737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.147 [2024-07-24 21:52:08.135329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.147 [2024-07-24 21:52:08.135771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.147 [2024-07-24 21:52:08.135779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.147 [2024-07-24 21:52:08.135785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.147 [2024-07-24 21:52:08.138422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.147 [2024-07-24 21:52:08.146912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.147 [2024-07-24 21:52:08.147610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.147 [2024-07-24 21:52:08.147626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.147 [2024-07-24 21:52:08.147633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.147 [2024-07-24 21:52:08.147804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.147 [2024-07-24 21:52:08.147976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.147 [2024-07-24 21:52:08.147983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.147 [2024-07-24 21:52:08.147989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.147 [2024-07-24 21:52:08.150677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.147 [2024-07-24 21:52:08.159770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.147 [2024-07-24 21:52:08.160473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.147 [2024-07-24 21:52:08.160517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.147 [2024-07-24 21:52:08.160538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.147 [2024-07-24 21:52:08.160947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.147 [2024-07-24 21:52:08.161123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.147 [2024-07-24 21:52:08.161131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.147 [2024-07-24 21:52:08.161138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.147 [2024-07-24 21:52:08.163815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.147 [2024-07-24 21:52:08.172701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.147 [2024-07-24 21:52:08.173379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.147 [2024-07-24 21:52:08.173397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.147 [2024-07-24 21:52:08.173403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.147 [2024-07-24 21:52:08.173565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.147 [2024-07-24 21:52:08.173727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.147 [2024-07-24 21:52:08.173734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.147 [2024-07-24 21:52:08.173740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.147 [2024-07-24 21:52:08.176440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.147 [2024-07-24 21:52:08.185636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.147 [2024-07-24 21:52:08.186289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.147 [2024-07-24 21:52:08.186331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.147 [2024-07-24 21:52:08.186352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.147 [2024-07-24 21:52:08.186794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.147 [2024-07-24 21:52:08.186957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.147 [2024-07-24 21:52:08.186964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.147 [2024-07-24 21:52:08.186970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.147 [2024-07-24 21:52:08.189662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.147 [2024-07-24 21:52:08.198453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.147 [2024-07-24 21:52:08.199171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.147 [2024-07-24 21:52:08.199216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.147 [2024-07-24 21:52:08.199237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.147 [2024-07-24 21:52:08.199818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.147 [2024-07-24 21:52:08.200059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.147 [2024-07-24 21:52:08.200068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.147 [2024-07-24 21:52:08.200075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.147 [2024-07-24 21:52:08.202908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.147 [2024-07-24 21:52:08.211613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.147 [2024-07-24 21:52:08.212257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.147 [2024-07-24 21:52:08.212304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.147 [2024-07-24 21:52:08.212324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.147 [2024-07-24 21:52:08.212697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.147 [2024-07-24 21:52:08.212872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.147 [2024-07-24 21:52:08.212880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.147 [2024-07-24 21:52:08.212886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.147 [2024-07-24 21:52:08.215616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.147 [2024-07-24 21:52:08.224452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.147 [2024-07-24 21:52:08.225112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.147 [2024-07-24 21:52:08.225154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.147 [2024-07-24 21:52:08.225176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.147 [2024-07-24 21:52:08.225755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.147 [2024-07-24 21:52:08.226159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.147 [2024-07-24 21:52:08.226168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.148 [2024-07-24 21:52:08.226174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.148 [2024-07-24 21:52:08.228927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.148 [2024-07-24 21:52:08.237363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.148 [2024-07-24 21:52:08.238076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.148 [2024-07-24 21:52:08.238118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.148 [2024-07-24 21:52:08.238139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.148 [2024-07-24 21:52:08.238718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.148 [2024-07-24 21:52:08.239018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.148 [2024-07-24 21:52:08.239026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.148 [2024-07-24 21:52:08.239032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.148 [2024-07-24 21:52:08.241715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.148 [2024-07-24 21:52:08.250162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.148 [2024-07-24 21:52:08.250849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.148 [2024-07-24 21:52:08.250891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.148 [2024-07-24 21:52:08.250912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.148 [2024-07-24 21:52:08.251507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.148 [2024-07-24 21:52:08.251981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.148 [2024-07-24 21:52:08.251989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.148 [2024-07-24 21:52:08.251995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.148 [2024-07-24 21:52:08.254708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.409 [2024-07-24 21:52:08.263239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.409 [2024-07-24 21:52:08.263936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.409 [2024-07-24 21:52:08.263978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.409 [2024-07-24 21:52:08.263999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.409 [2024-07-24 21:52:08.264596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.409 [2024-07-24 21:52:08.265094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.409 [2024-07-24 21:52:08.265103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.409 [2024-07-24 21:52:08.265109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.409 [2024-07-24 21:52:08.268960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.409 [2024-07-24 21:52:08.276936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.409 [2024-07-24 21:52:08.277652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.409 [2024-07-24 21:52:08.277696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.409 [2024-07-24 21:52:08.277717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.409 [2024-07-24 21:52:08.278109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.409 [2024-07-24 21:52:08.278282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.409 [2024-07-24 21:52:08.278290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.409 [2024-07-24 21:52:08.278296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.409 [2024-07-24 21:52:08.281005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.409 [2024-07-24 21:52:08.289756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.409 [2024-07-24 21:52:08.290452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.409 [2024-07-24 21:52:08.290495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.409 [2024-07-24 21:52:08.290516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.409 [2024-07-24 21:52:08.290881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.410 [2024-07-24 21:52:08.291049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.410 [2024-07-24 21:52:08.291057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.410 [2024-07-24 21:52:08.291063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.410 [2024-07-24 21:52:08.293763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.410 [2024-07-24 21:52:08.302614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.410 [2024-07-24 21:52:08.303293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.410 [2024-07-24 21:52:08.303335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.410 [2024-07-24 21:52:08.303363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.410 [2024-07-24 21:52:08.303877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.410 [2024-07-24 21:52:08.304040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.410 [2024-07-24 21:52:08.304054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.410 [2024-07-24 21:52:08.304059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.410 [2024-07-24 21:52:08.306821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.410 [2024-07-24 21:52:08.315439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.410 [2024-07-24 21:52:08.316065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.410 [2024-07-24 21:52:08.316106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.410 [2024-07-24 21:52:08.316128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.410 [2024-07-24 21:52:08.316707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.410 [2024-07-24 21:52:08.317196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.410 [2024-07-24 21:52:08.317204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.410 [2024-07-24 21:52:08.317210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.410 [2024-07-24 21:52:08.319892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.410 [2024-07-24 21:52:08.328343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.410 [2024-07-24 21:52:08.329005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.410 [2024-07-24 21:52:08.329059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.410 [2024-07-24 21:52:08.329081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.410 [2024-07-24 21:52:08.329363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.410 [2024-07-24 21:52:08.329535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.410 [2024-07-24 21:52:08.329543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.410 [2024-07-24 21:52:08.329549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.410 [2024-07-24 21:52:08.332306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.410 [2024-07-24 21:52:08.341263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.410 [2024-07-24 21:52:08.341913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.410 [2024-07-24 21:52:08.341929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.410 [2024-07-24 21:52:08.341935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.410 [2024-07-24 21:52:08.342121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.410 [2024-07-24 21:52:08.342293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.410 [2024-07-24 21:52:08.342303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.410 [2024-07-24 21:52:08.342309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.410 [2024-07-24 21:52:08.344992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.410 [2024-07-24 21:52:08.354181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.410 [2024-07-24 21:52:08.354883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.410 [2024-07-24 21:52:08.354898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.410 [2024-07-24 21:52:08.354905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.410 [2024-07-24 21:52:08.355081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.410 [2024-07-24 21:52:08.355253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.410 [2024-07-24 21:52:08.355261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.410 [2024-07-24 21:52:08.355267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.410 [2024-07-24 21:52:08.357949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.410 [2024-07-24 21:52:08.366982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.410 [2024-07-24 21:52:08.367701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.410 [2024-07-24 21:52:08.367742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.410 [2024-07-24 21:52:08.367763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.410 [2024-07-24 21:52:08.368087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.410 [2024-07-24 21:52:08.368259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.410 [2024-07-24 21:52:08.368267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.410 [2024-07-24 21:52:08.368274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.410 [2024-07-24 21:52:08.370963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.410 [2024-07-24 21:52:08.379854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.410 [2024-07-24 21:52:08.380489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.410 [2024-07-24 21:52:08.380531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.410 [2024-07-24 21:52:08.380552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.410 [2024-07-24 21:52:08.381009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.410 [2024-07-24 21:52:08.381187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.410 [2024-07-24 21:52:08.381195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.410 [2024-07-24 21:52:08.381201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.410 [2024-07-24 21:52:08.383876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.410 [2024-07-24 21:52:08.392710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.410 [2024-07-24 21:52:08.393357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.410 [2024-07-24 21:52:08.393400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.410 [2024-07-24 21:52:08.393421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.410 [2024-07-24 21:52:08.393999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.410 [2024-07-24 21:52:08.394478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.410 [2024-07-24 21:52:08.394487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.410 [2024-07-24 21:52:08.394493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.410 [2024-07-24 21:52:08.397177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.410 [2024-07-24 21:52:08.405657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.410 [2024-07-24 21:52:08.406328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.410 [2024-07-24 21:52:08.406371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.410 [2024-07-24 21:52:08.406393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.410 [2024-07-24 21:52:08.406919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.410 [2024-07-24 21:52:08.407177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.410 [2024-07-24 21:52:08.407189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.410 [2024-07-24 21:52:08.407198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.410 [2024-07-24 21:52:08.411249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.410 [2024-07-24 21:52:08.419027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.410 [2024-07-24 21:52:08.419721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.411 [2024-07-24 21:52:08.419764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.411 [2024-07-24 21:52:08.419786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.411 [2024-07-24 21:52:08.420380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.411 [2024-07-24 21:52:08.420844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.411 [2024-07-24 21:52:08.420852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.411 [2024-07-24 21:52:08.420857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.411 [2024-07-24 21:52:08.423579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.411 [2024-07-24 21:52:08.431883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.411 [2024-07-24 21:52:08.432587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.411 [2024-07-24 21:52:08.432629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.411 [2024-07-24 21:52:08.432657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.411 [2024-07-24 21:52:08.433250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.411 [2024-07-24 21:52:08.433592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.411 [2024-07-24 21:52:08.433600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.411 [2024-07-24 21:52:08.433606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.411 [2024-07-24 21:52:08.436290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.411 [2024-07-24 21:52:08.444713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.411 [2024-07-24 21:52:08.445416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.411 [2024-07-24 21:52:08.445457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.411 [2024-07-24 21:52:08.445478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.411 [2024-07-24 21:52:08.446070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.411 [2024-07-24 21:52:08.446593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.411 [2024-07-24 21:52:08.446601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.411 [2024-07-24 21:52:08.446606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.411 [2024-07-24 21:52:08.449289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.411 [2024-07-24 21:52:08.457866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.411 [2024-07-24 21:52:08.458523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.411 [2024-07-24 21:52:08.458564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.411 [2024-07-24 21:52:08.458585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.411 [2024-07-24 21:52:08.459176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.411 [2024-07-24 21:52:08.459758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.411 [2024-07-24 21:52:08.459782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.411 [2024-07-24 21:52:08.459802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.411 [2024-07-24 21:52:08.462628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.411 [2024-07-24 21:52:08.470839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.411 [2024-07-24 21:52:08.471421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.411 [2024-07-24 21:52:08.471467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.411 [2024-07-24 21:52:08.471489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.411 [2024-07-24 21:52:08.471833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.411 [2024-07-24 21:52:08.472006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.411 [2024-07-24 21:52:08.472017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.411 [2024-07-24 21:52:08.472024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.411 [2024-07-24 21:52:08.474714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.411 [2024-07-24 21:52:08.483707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.411 [2024-07-24 21:52:08.484270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.411 [2024-07-24 21:52:08.484313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.411 [2024-07-24 21:52:08.484335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.411 [2024-07-24 21:52:08.484914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.411 [2024-07-24 21:52:08.485505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.411 [2024-07-24 21:52:08.485531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.411 [2024-07-24 21:52:08.485552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.411 [2024-07-24 21:52:08.488333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.411 [2024-07-24 21:52:08.496545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.411 [2024-07-24 21:52:08.497145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.411 [2024-07-24 21:52:08.497189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.411 [2024-07-24 21:52:08.497210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.411 [2024-07-24 21:52:08.497447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.411 [2024-07-24 21:52:08.497645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.411 [2024-07-24 21:52:08.497656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.411 [2024-07-24 21:52:08.497665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.411 [2024-07-24 21:52:08.501731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.411 [2024-07-24 21:52:08.509988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.411 [2024-07-24 21:52:08.510703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.411 [2024-07-24 21:52:08.510745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.411 [2024-07-24 21:52:08.510767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.411 [2024-07-24 21:52:08.511360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.411 [2024-07-24 21:52:08.511552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.411 [2024-07-24 21:52:08.511559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.411 [2024-07-24 21:52:08.511565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.411 [2024-07-24 21:52:08.514287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.411 [2024-07-24 21:52:08.522958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.411 [2024-07-24 21:52:08.523590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.411 [2024-07-24 21:52:08.523606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.411 [2024-07-24 21:52:08.523613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.411 [2024-07-24 21:52:08.523790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.411 [2024-07-24 21:52:08.523968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.411 [2024-07-24 21:52:08.523976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.411 [2024-07-24 21:52:08.523982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.674 [2024-07-24 21:52:08.526811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.674 [2024-07-24 21:52:08.535900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.674 [2024-07-24 21:52:08.536581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.674 [2024-07-24 21:52:08.536624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.674 [2024-07-24 21:52:08.536645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.674 [2024-07-24 21:52:08.537240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.674 [2024-07-24 21:52:08.537730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.674 [2024-07-24 21:52:08.537741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.674 [2024-07-24 21:52:08.537750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.674 [2024-07-24 21:52:08.541811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.674 [2024-07-24 21:52:08.549572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.674 [2024-07-24 21:52:08.550284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.674 [2024-07-24 21:52:08.550327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.674 [2024-07-24 21:52:08.550348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.674 [2024-07-24 21:52:08.550850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.674 [2024-07-24 21:52:08.551018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.674 [2024-07-24 21:52:08.551026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.674 [2024-07-24 21:52:08.551032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.674 [2024-07-24 21:52:08.553773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.674 [2024-07-24 21:52:08.562399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.674 [2024-07-24 21:52:08.563071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.674 [2024-07-24 21:52:08.563112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.674 [2024-07-24 21:52:08.563134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.674 [2024-07-24 21:52:08.563720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.674 [2024-07-24 21:52:08.564191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.674 [2024-07-24 21:52:08.564199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.674 [2024-07-24 21:52:08.564205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.674 [2024-07-24 21:52:08.566886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.674 [2024-07-24 21:52:08.575427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.674 [2024-07-24 21:52:08.576024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.674 [2024-07-24 21:52:08.576080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.674 [2024-07-24 21:52:08.576103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.674 [2024-07-24 21:52:08.576624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.674 [2024-07-24 21:52:08.576797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.674 [2024-07-24 21:52:08.576805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.674 [2024-07-24 21:52:08.576811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.674 [2024-07-24 21:52:08.579583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.674 [2024-07-24 21:52:08.588437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.674 [2024-07-24 21:52:08.589066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.674 [2024-07-24 21:52:08.589109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.674 [2024-07-24 21:52:08.589130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.674 [2024-07-24 21:52:08.589626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.674 [2024-07-24 21:52:08.589799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.674 [2024-07-24 21:52:08.589807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.674 [2024-07-24 21:52:08.589813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.674 [2024-07-24 21:52:08.592507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.674 [2024-07-24 21:52:08.601490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.674 [2024-07-24 21:52:08.602198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.674 [2024-07-24 21:52:08.602241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.674 [2024-07-24 21:52:08.602263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.674 [2024-07-24 21:52:08.602721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.674 [2024-07-24 21:52:08.602893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.675 [2024-07-24 21:52:08.602901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.675 [2024-07-24 21:52:08.602910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.675 [2024-07-24 21:52:08.605645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.675 [2024-07-24 21:52:08.614441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.675 [2024-07-24 21:52:08.615144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.675 [2024-07-24 21:52:08.615188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.675 [2024-07-24 21:52:08.615210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.675 [2024-07-24 21:52:08.615790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.675 [2024-07-24 21:52:08.616055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.675 [2024-07-24 21:52:08.616064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.675 [2024-07-24 21:52:08.616071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.675 [2024-07-24 21:52:08.618802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.675 [2024-07-24 21:52:08.627365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.675 [2024-07-24 21:52:08.628000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.675 [2024-07-24 21:52:08.628057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.675 [2024-07-24 21:52:08.628080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.675 [2024-07-24 21:52:08.628659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.675 [2024-07-24 21:52:08.629120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.675 [2024-07-24 21:52:08.629131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.675 [2024-07-24 21:52:08.629139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.675 [2024-07-24 21:52:08.633208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.675 [2024-07-24 21:52:08.641036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.675 [2024-07-24 21:52:08.641733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.675 [2024-07-24 21:52:08.641776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.675 [2024-07-24 21:52:08.641797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.675 [2024-07-24 21:52:08.642145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.675 [2024-07-24 21:52:08.642318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.675 [2024-07-24 21:52:08.642325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.675 [2024-07-24 21:52:08.642331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.675 [2024-07-24 21:52:08.645083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.675 [2024-07-24 21:52:08.653999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.675 [2024-07-24 21:52:08.654706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.675 [2024-07-24 21:52:08.654755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.675 [2024-07-24 21:52:08.654777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.675 [2024-07-24 21:52:08.655038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.675 [2024-07-24 21:52:08.655215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.675 [2024-07-24 21:52:08.655223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.675 [2024-07-24 21:52:08.655229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.675 [2024-07-24 21:52:08.657837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.675 [2024-07-24 21:52:08.666982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.675 [2024-07-24 21:52:08.667400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.675 [2024-07-24 21:52:08.667416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.675 [2024-07-24 21:52:08.667423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.675 [2024-07-24 21:52:08.667594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.675 [2024-07-24 21:52:08.667767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.675 [2024-07-24 21:52:08.667774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.675 [2024-07-24 21:52:08.667781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.675 [2024-07-24 21:52:08.670507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.675 [2024-07-24 21:52:08.679942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.675 [2024-07-24 21:52:08.680643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.675 [2024-07-24 21:52:08.680687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.675 [2024-07-24 21:52:08.680709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.675 [2024-07-24 21:52:08.681106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.675 [2024-07-24 21:52:08.681278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.675 [2024-07-24 21:52:08.681286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.675 [2024-07-24 21:52:08.681292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.675 [2024-07-24 21:52:08.684046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.675 [2024-07-24 21:52:08.692810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.675 [2024-07-24 21:52:08.693510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.675 [2024-07-24 21:52:08.693525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.675 [2024-07-24 21:52:08.693532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.675 [2024-07-24 21:52:08.693699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.675 [2024-07-24 21:52:08.693865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.675 [2024-07-24 21:52:08.693872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.675 [2024-07-24 21:52:08.693877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.675 [2024-07-24 21:52:08.696574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.675 [2024-07-24 21:52:08.705765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.675 [2024-07-24 21:52:08.706453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.675 [2024-07-24 21:52:08.706470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.675 [2024-07-24 21:52:08.706477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.675 [2024-07-24 21:52:08.706650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.675 [2024-07-24 21:52:08.706822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.675 [2024-07-24 21:52:08.706830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.675 [2024-07-24 21:52:08.706836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.675 [2024-07-24 21:52:08.709699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.675 [2024-07-24 21:52:08.718758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.675 [2024-07-24 21:52:08.719424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.675 [2024-07-24 21:52:08.719466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.675 [2024-07-24 21:52:08.719488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.675 [2024-07-24 21:52:08.720078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.675 [2024-07-24 21:52:08.720357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.675 [2024-07-24 21:52:08.720368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.675 [2024-07-24 21:52:08.720377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.675 [2024-07-24 21:52:08.724445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.675 [2024-07-24 21:52:08.732267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.675 [2024-07-24 21:52:08.732975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.675 [2024-07-24 21:52:08.733017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.675 [2024-07-24 21:52:08.733038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.675 [2024-07-24 21:52:08.733283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.676 [2024-07-24 21:52:08.733455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.676 [2024-07-24 21:52:08.733463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.676 [2024-07-24 21:52:08.733469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.676 [2024-07-24 21:52:08.736197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.676 [2024-07-24 21:52:08.745106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.676 [2024-07-24 21:52:08.745787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.676 [2024-07-24 21:52:08.745828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.676 [2024-07-24 21:52:08.745850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.676 [2024-07-24 21:52:08.746257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.676 [2024-07-24 21:52:08.746430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.676 [2024-07-24 21:52:08.746437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.676 [2024-07-24 21:52:08.746444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.676 [2024-07-24 21:52:08.749097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.676 [2024-07-24 21:52:08.758029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.676 [2024-07-24 21:52:08.758718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.676 [2024-07-24 21:52:08.758760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.676 [2024-07-24 21:52:08.758781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.676 [2024-07-24 21:52:08.759050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.676 [2024-07-24 21:52:08.759238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.676 [2024-07-24 21:52:08.759246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.676 [2024-07-24 21:52:08.759252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.676 [2024-07-24 21:52:08.761910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.676 [2024-07-24 21:52:08.770961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.676 [2024-07-24 21:52:08.771676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.676 [2024-07-24 21:52:08.771718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.676 [2024-07-24 21:52:08.771740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.676 [2024-07-24 21:52:08.772333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.676 [2024-07-24 21:52:08.772709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.676 [2024-07-24 21:52:08.772716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.676 [2024-07-24 21:52:08.772722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.676 [2024-07-24 21:52:08.775404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.676 [2024-07-24 21:52:08.783914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.676 [2024-07-24 21:52:08.784558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.676 [2024-07-24 21:52:08.784599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.676 [2024-07-24 21:52:08.784627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.676 [2024-07-24 21:52:08.785107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.676 [2024-07-24 21:52:08.785286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.676 [2024-07-24 21:52:08.785293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.676 [2024-07-24 21:52:08.785300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.937 [2024-07-24 21:52:08.788093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.937 [2024-07-24 21:52:08.796939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.937 [2024-07-24 21:52:08.797661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.937 [2024-07-24 21:52:08.797705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.937 [2024-07-24 21:52:08.797727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.937 [2024-07-24 21:52:08.798082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.937 [2024-07-24 21:52:08.798255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.937 [2024-07-24 21:52:08.798263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.937 [2024-07-24 21:52:08.798269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.937 [2024-07-24 21:52:08.800949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.937 [2024-07-24 21:52:08.809896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.937 [2024-07-24 21:52:08.810616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.937 [2024-07-24 21:52:08.810659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.937 [2024-07-24 21:52:08.810681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.937 [2024-07-24 21:52:08.811259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.937 [2024-07-24 21:52:08.811432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.937 [2024-07-24 21:52:08.811440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.937 [2024-07-24 21:52:08.811446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.937 [2024-07-24 21:52:08.814130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.937 [2024-07-24 21:52:08.822693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.937 [2024-07-24 21:52:08.823406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.937 [2024-07-24 21:52:08.823450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.937 [2024-07-24 21:52:08.823471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.937 [2024-07-24 21:52:08.824064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.937 [2024-07-24 21:52:08.824435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.937 [2024-07-24 21:52:08.824447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.937 [2024-07-24 21:52:08.824453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.937 [2024-07-24 21:52:08.827138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.937 [2024-07-24 21:52:08.835612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.937 [2024-07-24 21:52:08.836291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.938 [2024-07-24 21:52:08.836333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.938 [2024-07-24 21:52:08.836354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.938 [2024-07-24 21:52:08.836934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.938 [2024-07-24 21:52:08.837446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.938 [2024-07-24 21:52:08.837455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.938 [2024-07-24 21:52:08.837461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.938 [2024-07-24 21:52:08.840119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.938 [2024-07-24 21:52:08.848520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.938 [2024-07-24 21:52:08.849213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.938 [2024-07-24 21:52:08.849254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.938 [2024-07-24 21:52:08.849275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.938 [2024-07-24 21:52:08.849854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.938 [2024-07-24 21:52:08.850067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.938 [2024-07-24 21:52:08.850091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.938 [2024-07-24 21:52:08.850097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.938 [2024-07-24 21:52:08.852783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.938 [2024-07-24 21:52:08.861491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.938 [2024-07-24 21:52:08.862201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.938 [2024-07-24 21:52:08.862243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.938 [2024-07-24 21:52:08.862264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.938 [2024-07-24 21:52:08.862647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.938 [2024-07-24 21:52:08.862820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.938 [2024-07-24 21:52:08.862828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.938 [2024-07-24 21:52:08.862834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.938 [2024-07-24 21:52:08.865515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.938 [2024-07-24 21:52:08.874354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.938 [2024-07-24 21:52:08.875021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.938 [2024-07-24 21:52:08.875076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.938 [2024-07-24 21:52:08.875098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.938 [2024-07-24 21:52:08.875456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.938 [2024-07-24 21:52:08.875628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.938 [2024-07-24 21:52:08.875635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.938 [2024-07-24 21:52:08.875641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.938 [2024-07-24 21:52:08.878350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.938 [2024-07-24 21:52:08.887207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.938 [2024-07-24 21:52:08.887895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.938 [2024-07-24 21:52:08.887938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.938 [2024-07-24 21:52:08.887959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.938 [2024-07-24 21:52:08.888553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.938 [2024-07-24 21:52:08.888821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.938 [2024-07-24 21:52:08.888829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.938 [2024-07-24 21:52:08.888835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.938 [2024-07-24 21:52:08.891518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.938 [2024-07-24 21:52:08.900085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.938 [2024-07-24 21:52:08.900770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.938 [2024-07-24 21:52:08.900810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.938 [2024-07-24 21:52:08.900831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.938 [2024-07-24 21:52:08.901421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.938 [2024-07-24 21:52:08.901694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.938 [2024-07-24 21:52:08.901702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.938 [2024-07-24 21:52:08.901707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.938 [2024-07-24 21:52:08.905717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.938 [2024-07-24 21:52:08.913846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.938 [2024-07-24 21:52:08.914529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.938 [2024-07-24 21:52:08.914545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.938 [2024-07-24 21:52:08.914552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.938 [2024-07-24 21:52:08.914726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.938 [2024-07-24 21:52:08.914899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.938 [2024-07-24 21:52:08.914906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.938 [2024-07-24 21:52:08.914912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.938 [2024-07-24 21:52:08.917666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.938 [2024-07-24 21:52:08.926764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.938 [2024-07-24 21:52:08.927372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.938 [2024-07-24 21:52:08.927416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.938 [2024-07-24 21:52:08.927437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.938 [2024-07-24 21:52:08.928018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.938 [2024-07-24 21:52:08.928561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.938 [2024-07-24 21:52:08.928570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.938 [2024-07-24 21:52:08.928575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.938 [2024-07-24 21:52:08.931293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.938 [2024-07-24 21:52:08.939663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.938 [2024-07-24 21:52:08.940308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.938 [2024-07-24 21:52:08.940323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.938 [2024-07-24 21:52:08.940330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.938 [2024-07-24 21:52:08.940492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.938 [2024-07-24 21:52:08.940654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.938 [2024-07-24 21:52:08.940662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.938 [2024-07-24 21:52:08.940667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.938 [2024-07-24 21:52:08.943413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.938 [2024-07-24 21:52:08.952581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.938 [2024-07-24 21:52:08.953264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.938 [2024-07-24 21:52:08.953305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.938 [2024-07-24 21:52:08.953326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.938 [2024-07-24 21:52:08.953723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.938 [2024-07-24 21:52:08.953887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.938 [2024-07-24 21:52:08.953895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.939 [2024-07-24 21:52:08.953904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.939 [2024-07-24 21:52:08.956655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.939 [2024-07-24 21:52:08.965601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.939 [2024-07-24 21:52:08.966261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.939 [2024-07-24 21:52:08.966305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.939 [2024-07-24 21:52:08.966326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.939 [2024-07-24 21:52:08.966761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.939 [2024-07-24 21:52:08.966933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.939 [2024-07-24 21:52:08.966941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.939 [2024-07-24 21:52:08.966947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.939 [2024-07-24 21:52:08.969634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.939 [2024-07-24 21:52:08.978652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.939 [2024-07-24 21:52:08.979354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.939 [2024-07-24 21:52:08.979396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.939 [2024-07-24 21:52:08.979418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.939 [2024-07-24 21:52:08.979996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.939 [2024-07-24 21:52:08.980220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.939 [2024-07-24 21:52:08.980229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.939 [2024-07-24 21:52:08.980235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.939 [2024-07-24 21:52:08.983084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.939 [2024-07-24 21:52:08.991748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.939 [2024-07-24 21:52:08.992443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.939 [2024-07-24 21:52:08.992486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.939 [2024-07-24 21:52:08.992507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.939 [2024-07-24 21:52:08.992906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.939 [2024-07-24 21:52:08.993090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.939 [2024-07-24 21:52:08.993098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.939 [2024-07-24 21:52:08.993105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.939 [2024-07-24 21:52:08.995802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.939 [2024-07-24 21:52:09.004663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.939 [2024-07-24 21:52:09.005358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.939 [2024-07-24 21:52:09.005401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.939 [2024-07-24 21:52:09.005423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.939 [2024-07-24 21:52:09.005893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.939 [2024-07-24 21:52:09.006072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.939 [2024-07-24 21:52:09.006081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.939 [2024-07-24 21:52:09.006087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.939 [2024-07-24 21:52:09.008824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.939 [2024-07-24 21:52:09.017642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.939 [2024-07-24 21:52:09.018315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.939 [2024-07-24 21:52:09.018331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.939 [2024-07-24 21:52:09.018337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.939 [2024-07-24 21:52:09.018500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.939 [2024-07-24 21:52:09.018662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.939 [2024-07-24 21:52:09.018670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.939 [2024-07-24 21:52:09.018676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.939 [2024-07-24 21:52:09.021404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.939 [2024-07-24 21:52:09.030502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.939 [2024-07-24 21:52:09.031195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.939 [2024-07-24 21:52:09.031240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.939 [2024-07-24 21:52:09.031262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.939 [2024-07-24 21:52:09.031842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.939 [2024-07-24 21:52:09.032293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.939 [2024-07-24 21:52:09.032302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.939 [2024-07-24 21:52:09.032308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.939 [2024-07-24 21:52:09.035063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.939 [2024-07-24 21:52:09.043470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.939 [2024-07-24 21:52:09.044150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.939 [2024-07-24 21:52:09.044194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:00.939 [2024-07-24 21:52:09.044216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:00.939 [2024-07-24 21:52:09.044652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:00.939 [2024-07-24 21:52:09.044816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.939 [2024-07-24 21:52:09.044824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.939 [2024-07-24 21:52:09.044829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.939 [2024-07-24 21:52:09.047616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.201 [2024-07-24 21:52:09.056391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.201 [2024-07-24 21:52:09.057014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.201 [2024-07-24 21:52:09.057031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.201 [2024-07-24 21:52:09.057038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.201 [2024-07-24 21:52:09.057220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.201 [2024-07-24 21:52:09.057405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.201 [2024-07-24 21:52:09.057413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.201 [2024-07-24 21:52:09.057419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.201 [2024-07-24 21:52:09.060261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.201 [2024-07-24 21:52:09.069276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.201 [2024-07-24 21:52:09.070036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.201 [2024-07-24 21:52:09.070101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.201 [2024-07-24 21:52:09.070123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.201 [2024-07-24 21:52:09.070703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.201 [2024-07-24 21:52:09.070920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.201 [2024-07-24 21:52:09.070928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.201 [2024-07-24 21:52:09.070935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.201 [2024-07-24 21:52:09.073670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.201 [2024-07-24 21:52:09.082202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.201 [2024-07-24 21:52:09.082914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.201 [2024-07-24 21:52:09.082956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.201 [2024-07-24 21:52:09.082978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.201 [2024-07-24 21:52:09.083577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.201 [2024-07-24 21:52:09.083831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.202 [2024-07-24 21:52:09.083842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.202 [2024-07-24 21:52:09.083856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.202 [2024-07-24 21:52:09.087923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.202 [2024-07-24 21:52:09.095630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.202 [2024-07-24 21:52:09.096313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.202 [2024-07-24 21:52:09.096356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.202 [2024-07-24 21:52:09.096379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.202 [2024-07-24 21:52:09.096957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.202 [2024-07-24 21:52:09.097209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.202 [2024-07-24 21:52:09.097217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.202 [2024-07-24 21:52:09.097223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.202 [2024-07-24 21:52:09.099946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.202 [2024-07-24 21:52:09.108557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.202 [2024-07-24 21:52:09.109275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.202 [2024-07-24 21:52:09.109320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.202 [2024-07-24 21:52:09.109342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.202 [2024-07-24 21:52:09.109907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.202 [2024-07-24 21:52:09.110086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.202 [2024-07-24 21:52:09.110095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.202 [2024-07-24 21:52:09.110101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.202 [2024-07-24 21:52:09.112842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.202 [2024-07-24 21:52:09.121554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.202 [2024-07-24 21:52:09.122296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.202 [2024-07-24 21:52:09.122340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.202 [2024-07-24 21:52:09.122362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.202 [2024-07-24 21:52:09.122891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.202 [2024-07-24 21:52:09.123059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.202 [2024-07-24 21:52:09.123083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.202 [2024-07-24 21:52:09.123090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.202 [2024-07-24 21:52:09.125840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.202 [2024-07-24 21:52:09.134558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.202 [2024-07-24 21:52:09.135256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.202 [2024-07-24 21:52:09.135307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.202 [2024-07-24 21:52:09.135329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.202 [2024-07-24 21:52:09.135888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.202 [2024-07-24 21:52:09.136066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.202 [2024-07-24 21:52:09.136074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.202 [2024-07-24 21:52:09.136080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.202 [2024-07-24 21:52:09.138817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.202 [2024-07-24 21:52:09.147546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.202 [2024-07-24 21:52:09.148253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.202 [2024-07-24 21:52:09.148296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.202 [2024-07-24 21:52:09.148317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.202 [2024-07-24 21:52:09.148903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.202 [2024-07-24 21:52:09.149078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.202 [2024-07-24 21:52:09.149086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.202 [2024-07-24 21:52:09.149092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.202 [2024-07-24 21:52:09.151774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.202 [2024-07-24 21:52:09.160488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.202 [2024-07-24 21:52:09.161134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.202 [2024-07-24 21:52:09.161178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.202 [2024-07-24 21:52:09.161199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.202 [2024-07-24 21:52:09.161715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.202 [2024-07-24 21:52:09.161888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.202 [2024-07-24 21:52:09.161895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.202 [2024-07-24 21:52:09.161901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.202 [2024-07-24 21:52:09.164583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.202 [2024-07-24 21:52:09.173431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.202 [2024-07-24 21:52:09.174120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.202 [2024-07-24 21:52:09.174164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.202 [2024-07-24 21:52:09.174185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.202 [2024-07-24 21:52:09.174526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.202 [2024-07-24 21:52:09.174702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.202 [2024-07-24 21:52:09.174711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.202 [2024-07-24 21:52:09.174716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.202 [2024-07-24 21:52:09.177495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.202 [2024-07-24 21:52:09.186323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.202 [2024-07-24 21:52:09.186949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.202 [2024-07-24 21:52:09.186990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.202 [2024-07-24 21:52:09.187011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.202 [2024-07-24 21:52:09.187382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.202 [2024-07-24 21:52:09.187555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.202 [2024-07-24 21:52:09.187563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.202 [2024-07-24 21:52:09.187569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.202 [2024-07-24 21:52:09.190284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.202 [2024-07-24 21:52:09.199142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.202 [2024-07-24 21:52:09.199812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.202 [2024-07-24 21:52:09.199854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.202 [2024-07-24 21:52:09.199876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.202 [2024-07-24 21:52:09.200365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.202 [2024-07-24 21:52:09.200538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.202 [2024-07-24 21:52:09.200546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.202 [2024-07-24 21:52:09.200552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.202 [2024-07-24 21:52:09.203238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.202 [2024-07-24 21:52:09.212271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.202 [2024-07-24 21:52:09.212866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.202 [2024-07-24 21:52:09.212907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.202 [2024-07-24 21:52:09.212927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.202 [2024-07-24 21:52:09.213377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.202 [2024-07-24 21:52:09.213551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.203 [2024-07-24 21:52:09.213558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.203 [2024-07-24 21:52:09.213564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.203 [2024-07-24 21:52:09.216384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.203 [2024-07-24 21:52:09.225210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.203 [2024-07-24 21:52:09.225753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.203 [2024-07-24 21:52:09.225769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.203 [2024-07-24 21:52:09.225776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.203 [2024-07-24 21:52:09.225948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.203 [2024-07-24 21:52:09.226125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.203 [2024-07-24 21:52:09.226133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.203 [2024-07-24 21:52:09.226139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.203 [2024-07-24 21:52:09.228826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.203 [2024-07-24 21:52:09.238251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.203 [2024-07-24 21:52:09.238940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.203 [2024-07-24 21:52:09.238956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.203 [2024-07-24 21:52:09.238962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.203 [2024-07-24 21:52:09.239137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.203 [2024-07-24 21:52:09.239309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.203 [2024-07-24 21:52:09.239317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.203 [2024-07-24 21:52:09.239323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.203 [2024-07-24 21:52:09.242048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.203 [2024-07-24 21:52:09.251242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.203 [2024-07-24 21:52:09.251794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.203 [2024-07-24 21:52:09.251809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.203 [2024-07-24 21:52:09.251815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.203 [2024-07-24 21:52:09.251987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.203 [2024-07-24 21:52:09.252165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.203 [2024-07-24 21:52:09.252173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.203 [2024-07-24 21:52:09.252179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.203 [2024-07-24 21:52:09.254860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.203 [2024-07-24 21:52:09.264228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.203 [2024-07-24 21:52:09.264863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.203 [2024-07-24 21:52:09.264905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.203 [2024-07-24 21:52:09.264933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.203 [2024-07-24 21:52:09.265460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.203 [2024-07-24 21:52:09.265632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.203 [2024-07-24 21:52:09.265640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.203 [2024-07-24 21:52:09.265646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.203 [2024-07-24 21:52:09.268337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.203 [2024-07-24 21:52:09.277230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.203 [2024-07-24 21:52:09.278058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.203 [2024-07-24 21:52:09.278101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.203 [2024-07-24 21:52:09.278123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.203 [2024-07-24 21:52:09.278576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.203 [2024-07-24 21:52:09.278749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.203 [2024-07-24 21:52:09.278757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.203 [2024-07-24 21:52:09.278763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.203 [2024-07-24 21:52:09.281449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.203 [2024-07-24 21:52:09.290114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.203 [2024-07-24 21:52:09.290722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.203 [2024-07-24 21:52:09.290763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.203 [2024-07-24 21:52:09.290785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.203 [2024-07-24 21:52:09.291377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.203 [2024-07-24 21:52:09.291638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.203 [2024-07-24 21:52:09.291646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.203 [2024-07-24 21:52:09.291652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.203 [2024-07-24 21:52:09.294370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.203 [2024-07-24 21:52:09.303171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.203 [2024-07-24 21:52:09.303804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.203 [2024-07-24 21:52:09.303846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.203 [2024-07-24 21:52:09.303867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.203 [2024-07-24 21:52:09.304455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.203 [2024-07-24 21:52:09.304772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.203 [2024-07-24 21:52:09.304783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.203 [2024-07-24 21:52:09.304791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.203 [2024-07-24 21:52:09.307523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.465 [2024-07-24 21:52:09.316247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.465 [2024-07-24 21:52:09.316865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.465 [2024-07-24 21:52:09.316906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.465 [2024-07-24 21:52:09.316928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.465 [2024-07-24 21:52:09.317519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.465 [2024-07-24 21:52:09.317817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.465 [2024-07-24 21:52:09.317825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.465 [2024-07-24 21:52:09.317832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.465 [2024-07-24 21:52:09.320559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.465 [2024-07-24 21:52:09.329277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.465 [2024-07-24 21:52:09.329843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.465 [2024-07-24 21:52:09.329885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.465 [2024-07-24 21:52:09.329907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.465 [2024-07-24 21:52:09.330500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.465 [2024-07-24 21:52:09.331033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.465 [2024-07-24 21:52:09.331041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.465 [2024-07-24 21:52:09.331054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.465 [2024-07-24 21:52:09.333736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.465 [2024-07-24 21:52:09.342279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.465 [2024-07-24 21:52:09.342862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.465 [2024-07-24 21:52:09.342905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.465 [2024-07-24 21:52:09.342926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.465 [2024-07-24 21:52:09.343518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.465 [2024-07-24 21:52:09.343831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.465 [2024-07-24 21:52:09.343840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.465 [2024-07-24 21:52:09.343846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.465 [2024-07-24 21:52:09.346553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.465 [2024-07-24 21:52:09.355191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.465 [2024-07-24 21:52:09.355786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.465 [2024-07-24 21:52:09.355827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.465 [2024-07-24 21:52:09.355848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.465 [2024-07-24 21:52:09.356440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.465 [2024-07-24 21:52:09.356846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.465 [2024-07-24 21:52:09.356858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.465 [2024-07-24 21:52:09.356866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.465 [2024-07-24 21:52:09.360924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.465 [2024-07-24 21:52:09.368645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.465 [2024-07-24 21:52:09.369332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.465 [2024-07-24 21:52:09.369375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.465 [2024-07-24 21:52:09.369396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.465 [2024-07-24 21:52:09.369818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.465 [2024-07-24 21:52:09.369991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.465 [2024-07-24 21:52:09.369998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.465 [2024-07-24 21:52:09.370005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.465 [2024-07-24 21:52:09.372752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.465 [2024-07-24 21:52:09.381552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.465 [2024-07-24 21:52:09.382220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.465 [2024-07-24 21:52:09.382235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.465 [2024-07-24 21:52:09.382242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.465 [2024-07-24 21:52:09.382405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.465 [2024-07-24 21:52:09.382567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.465 [2024-07-24 21:52:09.382574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.465 [2024-07-24 21:52:09.382580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.465 [2024-07-24 21:52:09.385247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.465 [2024-07-24 21:52:09.394571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.465 [2024-07-24 21:52:09.395257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.465 [2024-07-24 21:52:09.395300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.465 [2024-07-24 21:52:09.395320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.465 [2024-07-24 21:52:09.395885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.465 [2024-07-24 21:52:09.396056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.465 [2024-07-24 21:52:09.396064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.465 [2024-07-24 21:52:09.396088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.465 [2024-07-24 21:52:09.398814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.465 [2024-07-24 21:52:09.407507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.465 [2024-07-24 21:52:09.408213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.465 [2024-07-24 21:52:09.408230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.465 [2024-07-24 21:52:09.408236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.465 [2024-07-24 21:52:09.408399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.465 [2024-07-24 21:52:09.408560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.465 [2024-07-24 21:52:09.408568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.465 [2024-07-24 21:52:09.408573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.465 [2024-07-24 21:52:09.411277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.465 [2024-07-24 21:52:09.420327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.465 [2024-07-24 21:52:09.421009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.465 [2024-07-24 21:52:09.421063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.465 [2024-07-24 21:52:09.421085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.465 [2024-07-24 21:52:09.421591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.465 [2024-07-24 21:52:09.421763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.465 [2024-07-24 21:52:09.421771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.465 [2024-07-24 21:52:09.421777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.465 [2024-07-24 21:52:09.424465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.465 [2024-07-24 21:52:09.433242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.465 [2024-07-24 21:52:09.433930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.465 [2024-07-24 21:52:09.433973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.466 [2024-07-24 21:52:09.433994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.466 [2024-07-24 21:52:09.434391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.466 [2024-07-24 21:52:09.434564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.466 [2024-07-24 21:52:09.434572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.466 [2024-07-24 21:52:09.434581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.466 [2024-07-24 21:52:09.437265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.466 [2024-07-24 21:52:09.446144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.466 [2024-07-24 21:52:09.446846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.466 [2024-07-24 21:52:09.446888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.466 [2024-07-24 21:52:09.446909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.466 [2024-07-24 21:52:09.447214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.466 [2024-07-24 21:52:09.447387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.466 [2024-07-24 21:52:09.447394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.466 [2024-07-24 21:52:09.447400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.466 [2024-07-24 21:52:09.450079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.466 [2024-07-24 21:52:09.458955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.466 [2024-07-24 21:52:09.459632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.466 [2024-07-24 21:52:09.459675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.466 [2024-07-24 21:52:09.459696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.466 [2024-07-24 21:52:09.459992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.466 [2024-07-24 21:52:09.460170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.466 [2024-07-24 21:52:09.460178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.466 [2024-07-24 21:52:09.460184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.466 [2024-07-24 21:52:09.462868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.466 [2024-07-24 21:52:09.472054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.466 [2024-07-24 21:52:09.472752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.466 [2024-07-24 21:52:09.472796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.466 [2024-07-24 21:52:09.472818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.466 [2024-07-24 21:52:09.473419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.466 [2024-07-24 21:52:09.473956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.466 [2024-07-24 21:52:09.473964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.466 [2024-07-24 21:52:09.473971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.466 [2024-07-24 21:52:09.476713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.466 [2024-07-24 21:52:09.484949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.466 [2024-07-24 21:52:09.485592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.466 [2024-07-24 21:52:09.485634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.466 [2024-07-24 21:52:09.485656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.466 [2024-07-24 21:52:09.486085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.466 [2024-07-24 21:52:09.486258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.466 [2024-07-24 21:52:09.486266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.466 [2024-07-24 21:52:09.486272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.466 [2024-07-24 21:52:09.488953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.466 [2024-07-24 21:52:09.497864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.466 [2024-07-24 21:52:09.498544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.466 [2024-07-24 21:52:09.498588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.466 [2024-07-24 21:52:09.498610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.466 [2024-07-24 21:52:09.499074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.466 [2024-07-24 21:52:09.499246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.466 [2024-07-24 21:52:09.499254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.466 [2024-07-24 21:52:09.499260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.466 [2024-07-24 21:52:09.501940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.466 [2024-07-24 21:52:09.510747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.466 [2024-07-24 21:52:09.511434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.466 [2024-07-24 21:52:09.511478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.466 [2024-07-24 21:52:09.511499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.466 [2024-07-24 21:52:09.511842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.466 [2024-07-24 21:52:09.512005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.466 [2024-07-24 21:52:09.512012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.466 [2024-07-24 21:52:09.512018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.466 [2024-07-24 21:52:09.514781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.466 [2024-07-24 21:52:09.523598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.466 [2024-07-24 21:52:09.524255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.466 [2024-07-24 21:52:09.524300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.466 [2024-07-24 21:52:09.524322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.466 [2024-07-24 21:52:09.524903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.466 [2024-07-24 21:52:09.525069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.466 [2024-07-24 21:52:09.525077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.466 [2024-07-24 21:52:09.525083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.466 [2024-07-24 21:52:09.527680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.466 [2024-07-24 21:52:09.536551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.466 [2024-07-24 21:52:09.537236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.466 [2024-07-24 21:52:09.537279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.466 [2024-07-24 21:52:09.537301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.466 [2024-07-24 21:52:09.537879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.466 [2024-07-24 21:52:09.538083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.466 [2024-07-24 21:52:09.538091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.466 [2024-07-24 21:52:09.538097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.466 [2024-07-24 21:52:09.540773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.466 [2024-07-24 21:52:09.549359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.466 [2024-07-24 21:52:09.550003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.466 [2024-07-24 21:52:09.550018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.466 [2024-07-24 21:52:09.550024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.466 [2024-07-24 21:52:09.550217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.466 [2024-07-24 21:52:09.550390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.466 [2024-07-24 21:52:09.550398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.466 [2024-07-24 21:52:09.550404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.466 [2024-07-24 21:52:09.553085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.466 [2024-07-24 21:52:09.562267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.466 [2024-07-24 21:52:09.562949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.466 [2024-07-24 21:52:09.562991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.467 [2024-07-24 21:52:09.563012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.467 [2024-07-24 21:52:09.563387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.467 [2024-07-24 21:52:09.563560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.467 [2024-07-24 21:52:09.563568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.467 [2024-07-24 21:52:09.563574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.467 [2024-07-24 21:52:09.566256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.467 [2024-07-24 21:52:09.575231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.467 [2024-07-24 21:52:09.575882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.467 [2024-07-24 21:52:09.575923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.467 [2024-07-24 21:52:09.575944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.467 [2024-07-24 21:52:09.576371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.467 [2024-07-24 21:52:09.576549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.467 [2024-07-24 21:52:09.576558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.467 [2024-07-24 21:52:09.576564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.467 [2024-07-24 21:52:09.579356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.731 [2024-07-24 21:52:09.588255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.731 [2024-07-24 21:52:09.588935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.731 [2024-07-24 21:52:09.588977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.731 [2024-07-24 21:52:09.588998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.731 [2024-07-24 21:52:09.589360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.731 [2024-07-24 21:52:09.589532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.731 [2024-07-24 21:52:09.589540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.731 [2024-07-24 21:52:09.589546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.731 [2024-07-24 21:52:09.593413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.731 [2024-07-24 21:52:09.601795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.731 [2024-07-24 21:52:09.602473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.731 [2024-07-24 21:52:09.602517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.731 [2024-07-24 21:52:09.602538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.731 [2024-07-24 21:52:09.603128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.731 [2024-07-24 21:52:09.603473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.731 [2024-07-24 21:52:09.603481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.731 [2024-07-24 21:52:09.603487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.731 [2024-07-24 21:52:09.606206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.731 [2024-07-24 21:52:09.614738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.731 [2024-07-24 21:52:09.615430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.731 [2024-07-24 21:52:09.615479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.731 [2024-07-24 21:52:09.615501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.731 [2024-07-24 21:52:09.616095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.731 [2024-07-24 21:52:09.616423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.731 [2024-07-24 21:52:09.616430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.731 [2024-07-24 21:52:09.616436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.731 [2024-07-24 21:52:09.619118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.731 [2024-07-24 21:52:09.627630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.731 [2024-07-24 21:52:09.628282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.731 [2024-07-24 21:52:09.628324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.731 [2024-07-24 21:52:09.628345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.731 [2024-07-24 21:52:09.628924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.731 [2024-07-24 21:52:09.629501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.731 [2024-07-24 21:52:09.629510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.731 [2024-07-24 21:52:09.629516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.731 [2024-07-24 21:52:09.632165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.731 [2024-07-24 21:52:09.640485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.731 [2024-07-24 21:52:09.641174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.731 [2024-07-24 21:52:09.641216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.731 [2024-07-24 21:52:09.641236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.731 [2024-07-24 21:52:09.641814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.731 [2024-07-24 21:52:09.642406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.731 [2024-07-24 21:52:09.642431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.731 [2024-07-24 21:52:09.642452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.731 [2024-07-24 21:52:09.645383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.731 [2024-07-24 21:52:09.653435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.731 [2024-07-24 21:52:09.654068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.731 [2024-07-24 21:52:09.654109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.731 [2024-07-24 21:52:09.654131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.731 [2024-07-24 21:52:09.654507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.731 [2024-07-24 21:52:09.654682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.731 [2024-07-24 21:52:09.654690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.731 [2024-07-24 21:52:09.654696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.731 [2024-07-24 21:52:09.657381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.731 [2024-07-24 21:52:09.666335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.731 [2024-07-24 21:52:09.667060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.731 [2024-07-24 21:52:09.667102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.731 [2024-07-24 21:52:09.667125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.731 [2024-07-24 21:52:09.667641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.731 [2024-07-24 21:52:09.667814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.731 [2024-07-24 21:52:09.667822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.731 [2024-07-24 21:52:09.667828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.731 [2024-07-24 21:52:09.670549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.731 [2024-07-24 21:52:09.679247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.731 [2024-07-24 21:52:09.679859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.731 [2024-07-24 21:52:09.679875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.731 [2024-07-24 21:52:09.679882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.731 [2024-07-24 21:52:09.680061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.732 [2024-07-24 21:52:09.680233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.732 [2024-07-24 21:52:09.680241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.732 [2024-07-24 21:52:09.680247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.732 [2024-07-24 21:52:09.682995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.732 [2024-07-24 21:52:09.692171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.732 [2024-07-24 21:52:09.692850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.732 [2024-07-24 21:52:09.692891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.732 [2024-07-24 21:52:09.692912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.732 [2024-07-24 21:52:09.693356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.732 [2024-07-24 21:52:09.693529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.732 [2024-07-24 21:52:09.693537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.732 [2024-07-24 21:52:09.693543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.732 [2024-07-24 21:52:09.696246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.732 [2024-07-24 21:52:09.704994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.732 [2024-07-24 21:52:09.705690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.732 [2024-07-24 21:52:09.705732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.732 [2024-07-24 21:52:09.705754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.732 [2024-07-24 21:52:09.706332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.732 [2024-07-24 21:52:09.706505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.732 [2024-07-24 21:52:09.706512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.732 [2024-07-24 21:52:09.706519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.732 [2024-07-24 21:52:09.709239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.732 [2024-07-24 21:52:09.717792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.732 [2024-07-24 21:52:09.718459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.732 [2024-07-24 21:52:09.718503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.732 [2024-07-24 21:52:09.718524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.732 [2024-07-24 21:52:09.718982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.732 [2024-07-24 21:52:09.719171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.732 [2024-07-24 21:52:09.719179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.732 [2024-07-24 21:52:09.719185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.732 [2024-07-24 21:52:09.722013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.732 [2024-07-24 21:52:09.730898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.732 [2024-07-24 21:52:09.731579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.732 [2024-07-24 21:52:09.731621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.732 [2024-07-24 21:52:09.731643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.732 [2024-07-24 21:52:09.732068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.732 [2024-07-24 21:52:09.732241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.732 [2024-07-24 21:52:09.732249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.732 [2024-07-24 21:52:09.732255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.732 [2024-07-24 21:52:09.735045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.732 [2024-07-24 21:52:09.743925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.732 [2024-07-24 21:52:09.744553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.732 [2024-07-24 21:52:09.744594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.732 [2024-07-24 21:52:09.744623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.732 [2024-07-24 21:52:09.744921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.732 [2024-07-24 21:52:09.745116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.732 [2024-07-24 21:52:09.745125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.732 [2024-07-24 21:52:09.745131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.732 [2024-07-24 21:52:09.747827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.732 [2024-07-24 21:52:09.756818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.732 [2024-07-24 21:52:09.757495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.732 [2024-07-24 21:52:09.757537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.732 [2024-07-24 21:52:09.757558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.732 [2024-07-24 21:52:09.758148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.732 [2024-07-24 21:52:09.758661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.732 [2024-07-24 21:52:09.758669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.732 [2024-07-24 21:52:09.758675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.732 [2024-07-24 21:52:09.761391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.732 [2024-07-24 21:52:09.769726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.732 [2024-07-24 21:52:09.770395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.732 [2024-07-24 21:52:09.770411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.732 [2024-07-24 21:52:09.770417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.732 [2024-07-24 21:52:09.770589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.732 [2024-07-24 21:52:09.770761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.732 [2024-07-24 21:52:09.770769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.732 [2024-07-24 21:52:09.770775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.732 [2024-07-24 21:52:09.773477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.732 [2024-07-24 21:52:09.782642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.732 [2024-07-24 21:52:09.783299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.732 [2024-07-24 21:52:09.783341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.732 [2024-07-24 21:52:09.783363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.732 [2024-07-24 21:52:09.783752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.732 [2024-07-24 21:52:09.783915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.732 [2024-07-24 21:52:09.783925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.732 [2024-07-24 21:52:09.783931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.732 [2024-07-24 21:52:09.786627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.732 [2024-07-24 21:52:09.795553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.732 [2024-07-24 21:52:09.796227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.732 [2024-07-24 21:52:09.796274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.732 [2024-07-24 21:52:09.796296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.732 [2024-07-24 21:52:09.796835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.732 [2024-07-24 21:52:09.797006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.732 [2024-07-24 21:52:09.797014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.732 [2024-07-24 21:52:09.797020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.732 [2024-07-24 21:52:09.799712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.732 [2024-07-24 21:52:09.808585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.732 [2024-07-24 21:52:09.809238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.732 [2024-07-24 21:52:09.809254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.732 [2024-07-24 21:52:09.809261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.733 [2024-07-24 21:52:09.809438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.733 [2024-07-24 21:52:09.809616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.733 [2024-07-24 21:52:09.809634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.733 [2024-07-24 21:52:09.809640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.733 [2024-07-24 21:52:09.812374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.733 [2024-07-24 21:52:09.821485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.733 [2024-07-24 21:52:09.822144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.733 [2024-07-24 21:52:09.822187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.733 [2024-07-24 21:52:09.822208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.733 [2024-07-24 21:52:09.822501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.733 [2024-07-24 21:52:09.822663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.733 [2024-07-24 21:52:09.822671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.733 [2024-07-24 21:52:09.822676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.733 [2024-07-24 21:52:09.825372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.733 [2024-07-24 21:52:09.834297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.733 [2024-07-24 21:52:09.834883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.733 [2024-07-24 21:52:09.834924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.733 [2024-07-24 21:52:09.834945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.733 [2024-07-24 21:52:09.835539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.733 [2024-07-24 21:52:09.835966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.733 [2024-07-24 21:52:09.835974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.733 [2024-07-24 21:52:09.835980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.733 [2024-07-24 21:52:09.838755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.998 [2024-07-24 21:52:09.847314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.998 [2024-07-24 21:52:09.847996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.998 [2024-07-24 21:52:09.848036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.998 [2024-07-24 21:52:09.848073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.998 [2024-07-24 21:52:09.848652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.998 [2024-07-24 21:52:09.849073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.998 [2024-07-24 21:52:09.849081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.998 [2024-07-24 21:52:09.849088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.998 [2024-07-24 21:52:09.851871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.998 [2024-07-24 21:52:09.860153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.998 [2024-07-24 21:52:09.860838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.998 [2024-07-24 21:52:09.860880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.998 [2024-07-24 21:52:09.860901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.998 [2024-07-24 21:52:09.861247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.998 [2024-07-24 21:52:09.861419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.998 [2024-07-24 21:52:09.861427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.998 [2024-07-24 21:52:09.861433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.998 [2024-07-24 21:52:09.864121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.998 [2024-07-24 21:52:09.873215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.998 [2024-07-24 21:52:09.873960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.998 [2024-07-24 21:52:09.874001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.998 [2024-07-24 21:52:09.874022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.998 [2024-07-24 21:52:09.874622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.998 [2024-07-24 21:52:09.875036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.998 [2024-07-24 21:52:09.875049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.998 [2024-07-24 21:52:09.875055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.998 [2024-07-24 21:52:09.877790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.998 [2024-07-24 21:52:09.886271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.998 [2024-07-24 21:52:09.886899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.998 [2024-07-24 21:52:09.886943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.998 [2024-07-24 21:52:09.886966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.998 [2024-07-24 21:52:09.887454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.998 [2024-07-24 21:52:09.887628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.998 [2024-07-24 21:52:09.887636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.998 [2024-07-24 21:52:09.887642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.998 [2024-07-24 21:52:09.890423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.998 [2024-07-24 21:52:09.899316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.998 [2024-07-24 21:52:09.900054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.998 [2024-07-24 21:52:09.900097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.998 [2024-07-24 21:52:09.900118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.998 [2024-07-24 21:52:09.900696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.998 [2024-07-24 21:52:09.901036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.998 [2024-07-24 21:52:09.901049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.998 [2024-07-24 21:52:09.901055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.998 [2024-07-24 21:52:09.903738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.998 [2024-07-24 21:52:09.912231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.998 [2024-07-24 21:52:09.912858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.998 [2024-07-24 21:52:09.912900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.998 [2024-07-24 21:52:09.912921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.998 [2024-07-24 21:52:09.913516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.998 [2024-07-24 21:52:09.913769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.998 [2024-07-24 21:52:09.913780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.998 [2024-07-24 21:52:09.913794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.998 [2024-07-24 21:52:09.917864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.998 [2024-07-24 21:52:09.925811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.998 [2024-07-24 21:52:09.926502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.998 [2024-07-24 21:52:09.926546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420
00:27:01.998 [2024-07-24 21:52:09.926568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set
00:27:01.998 [2024-07-24 21:52:09.927060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor
00:27:01.998 [2024-07-24 21:52:09.927233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.998 [2024-07-24 21:52:09.927241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.998 [2024-07-24 21:52:09.927248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3208197 Killed "${NVMF_APP[@]}" "$@"
00:27:01.998 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:01.998 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:01.998 [2024-07-24 21:52:09.929996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.998 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:01.998 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:01.998 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:01.998 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3209603
00:27:01.998 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3209603
00:27:01.998 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:01.998 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3209603 ']'
00:27:01.998 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:01.998 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:01.998 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:01.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
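[Editor's note] At this point the harness has killed the previous target (pid 3208197); tgt_init/nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xE, and waitforlisten then polls pid 3209603 until the new process answers on /var/tmp/spdk.sock (max_retries=100). Purely as a sketch of that wait-until-listening idea, assuming "listening" can be probed with a plain connect() to the UNIX domain socket (the real waitforlisten helper in autotest_common.sh does more than this):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Hypothetical stand-in for waitforlisten: retry connecting to the RPC
 * socket until the freshly started daemon answers, or give up after
 * max_retries attempts (100 in the log above). */
static int wait_for_listen(const char *path, int max_retries)
{
	for (int attempt = 0; attempt < max_retries; attempt++) {
		int fd = socket(AF_UNIX, SOCK_STREAM, 0);
		if (fd < 0) {
			return -errno;
		}

		struct sockaddr_un addr = { .sun_family = AF_UNIX };
		strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

		if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
			close(fd);
			return 0;	/* target is up and listening */
		}
		close(fd);
		usleep(100 * 1000);	/* wait 100 ms before the next probe */
	}
	return -ETIMEDOUT;
}

int main(void)
{
	if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0) {
		printf("nvmf_tgt is listening on /var/tmp/spdk.sock\n");
	} else {
		printf("gave up waiting for the RPC socket\n");
	}
	return 0;
}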
00:27:01.999 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:01.999 21:52:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.999 [2024-07-24 21:52:09.938875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.999 [2024-07-24 21:52:09.939318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.999 [2024-07-24 21:52:09.939334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.999 [2024-07-24 21:52:09.939341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.999 [2024-07-24 21:52:09.939518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.999 [2024-07-24 21:52:09.939695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.999 [2024-07-24 21:52:09.939705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.999 [2024-07-24 21:52:09.939714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.999 [2024-07-24 21:52:09.942558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.999 [2024-07-24 21:52:09.951944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.999 [2024-07-24 21:52:09.952654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.999 [2024-07-24 21:52:09.952672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.999 [2024-07-24 21:52:09.952680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.999 [2024-07-24 21:52:09.952860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.999 [2024-07-24 21:52:09.953039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.999 [2024-07-24 21:52:09.953055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.999 [2024-07-24 21:52:09.953061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.999 [2024-07-24 21:52:09.955894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.999 [2024-07-24 21:52:09.965112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.999 [2024-07-24 21:52:09.965740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.999 [2024-07-24 21:52:09.965755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.999 [2024-07-24 21:52:09.965763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.999 [2024-07-24 21:52:09.965939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.999 [2024-07-24 21:52:09.966124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.999 [2024-07-24 21:52:09.966133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.999 [2024-07-24 21:52:09.966140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.999 [2024-07-24 21:52:09.968973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.999 [2024-07-24 21:52:09.978196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.999 [2024-07-24 21:52:09.978798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.999 [2024-07-24 21:52:09.978815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.999 [2024-07-24 21:52:09.978822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.999 [2024-07-24 21:52:09.978999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.999 [2024-07-24 21:52:09.979181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.999 [2024-07-24 21:52:09.979190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.999 [2024-07-24 21:52:09.979196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.999 [2024-07-24 21:52:09.982002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.999 [2024-07-24 21:52:09.984967] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:27:01.999 [2024-07-24 21:52:09.985009] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.999 [2024-07-24 21:52:09.991393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.999 [2024-07-24 21:52:09.992061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.999 [2024-07-24 21:52:09.992078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.999 [2024-07-24 21:52:09.992085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.999 [2024-07-24 21:52:09.992263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.999 [2024-07-24 21:52:09.992441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.999 [2024-07-24 21:52:09.992449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.999 [2024-07-24 21:52:09.992455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.999 [2024-07-24 21:52:09.995292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.999 [2024-07-24 21:52:10.004495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.999 [2024-07-24 21:52:10.005180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.999 [2024-07-24 21:52:10.005197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.999 [2024-07-24 21:52:10.005204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.999 [2024-07-24 21:52:10.005383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.999 [2024-07-24 21:52:10.005561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.999 [2024-07-24 21:52:10.005569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.999 [2024-07-24 21:52:10.005576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.999 [2024-07-24 21:52:10.008414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.999 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.999 [2024-07-24 21:52:10.018283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.999 [2024-07-24 21:52:10.018878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.999 [2024-07-24 21:52:10.018896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.999 [2024-07-24 21:52:10.018904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.999 [2024-07-24 21:52:10.019089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.999 [2024-07-24 21:52:10.019268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.999 [2024-07-24 21:52:10.019276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.999 [2024-07-24 21:52:10.019283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.999 [2024-07-24 21:52:10.022125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.999 [2024-07-24 21:52:10.031499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.999 [2024-07-24 21:52:10.032192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.999 [2024-07-24 21:52:10.032209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.999 [2024-07-24 21:52:10.032216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.999 [2024-07-24 21:52:10.032395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.999 [2024-07-24 21:52:10.032573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.999 [2024-07-24 21:52:10.032581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.999 [2024-07-24 21:52:10.032588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.999 [2024-07-24 21:52:10.035433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.999 [2024-07-24 21:52:10.044647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.999 [2024-07-24 21:52:10.045361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.999 [2024-07-24 21:52:10.045378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:01.999 [2024-07-24 21:52:10.045385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:01.999 [2024-07-24 21:52:10.045562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:01.999 [2024-07-24 21:52:10.045740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.999 [2024-07-24 21:52:10.045748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.000 [2024-07-24 21:52:10.045754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.000 [2024-07-24 21:52:10.048606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.000 [2024-07-24 21:52:10.048986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:02.000 [2024-07-24 21:52:10.057851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.000 [2024-07-24 21:52:10.058314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.000 [2024-07-24 21:52:10.058332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.000 [2024-07-24 21:52:10.058341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.000 [2024-07-24 21:52:10.058518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.000 [2024-07-24 21:52:10.058696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.000 [2024-07-24 21:52:10.058704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.000 [2024-07-24 21:52:10.058711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.000 [2024-07-24 21:52:10.061584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.000 [2024-07-24 21:52:10.071116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.000 [2024-07-24 21:52:10.071574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.000 [2024-07-24 21:52:10.071592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.000 [2024-07-24 21:52:10.071600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.000 [2024-07-24 21:52:10.071783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.000 [2024-07-24 21:52:10.071961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.000 [2024-07-24 21:52:10.071969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.000 [2024-07-24 21:52:10.071975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.000 [2024-07-24 21:52:10.074822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.000 [2024-07-24 21:52:10.084188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.000 [2024-07-24 21:52:10.084884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.000 [2024-07-24 21:52:10.084901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.000 [2024-07-24 21:52:10.084908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.000 [2024-07-24 21:52:10.085091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.000 [2024-07-24 21:52:10.085271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.000 [2024-07-24 21:52:10.085280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.000 [2024-07-24 21:52:10.085286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.000 [2024-07-24 21:52:10.088118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.000 [2024-07-24 21:52:10.097312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.000 [2024-07-24 21:52:10.097933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.000 [2024-07-24 21:52:10.097949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.000 [2024-07-24 21:52:10.097956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.000 [2024-07-24 21:52:10.098139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.000 [2024-07-24 21:52:10.098322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.000 [2024-07-24 21:52:10.098330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.000 [2024-07-24 21:52:10.098337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.000 [2024-07-24 21:52:10.101184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.000 [2024-07-24 21:52:10.110379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.000 [2024-07-24 21:52:10.111069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.000 [2024-07-24 21:52:10.111085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.000 [2024-07-24 21:52:10.111093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.000 [2024-07-24 21:52:10.111272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.000 [2024-07-24 21:52:10.111450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.000 [2024-07-24 21:52:10.111458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.000 [2024-07-24 21:52:10.111469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.261 [2024-07-24 21:52:10.114302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.261 [2024-07-24 21:52:10.123505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.261 [2024-07-24 21:52:10.124212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.261 [2024-07-24 21:52:10.124229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.261 [2024-07-24 21:52:10.124237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.261 [2024-07-24 21:52:10.124416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.261 [2024-07-24 21:52:10.124594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.261 [2024-07-24 21:52:10.124602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.261 [2024-07-24 21:52:10.124609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.261 [2024-07-24 21:52:10.127444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.261 [2024-07-24 21:52:10.136677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.261 [2024-07-24 21:52:10.137284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.261 [2024-07-24 21:52:10.137301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.261 [2024-07-24 21:52:10.137308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.261 [2024-07-24 21:52:10.137486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.261 [2024-07-24 21:52:10.137664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.261 [2024-07-24 21:52:10.137672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.261 [2024-07-24 21:52:10.137679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.261 [2024-07-24 21:52:10.140521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.261 [2024-07-24 21:52:10.142620] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.261 [2024-07-24 21:52:10.142644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.261 [2024-07-24 21:52:10.142651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.261 [2024-07-24 21:52:10.142657] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.261 [2024-07-24 21:52:10.142662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:02.261 [2024-07-24 21:52:10.142702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:02.261 [2024-07-24 21:52:10.142808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:02.261 [2024-07-24 21:52:10.142809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.261 [2024-07-24 21:52:10.149736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.261 [2024-07-24 21:52:10.150443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.261 [2024-07-24 21:52:10.150462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.261 [2024-07-24 21:52:10.150470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.261 [2024-07-24 21:52:10.150653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.261 [2024-07-24 21:52:10.150830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.261 [2024-07-24 21:52:10.150839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.261 [2024-07-24 21:52:10.150846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.261 [2024-07-24 21:52:10.153682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.261 [2024-07-24 21:52:10.162898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.261 [2024-07-24 21:52:10.163606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.262 [2024-07-24 21:52:10.163625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.262 [2024-07-24 21:52:10.163633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.262 [2024-07-24 21:52:10.163813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.262 [2024-07-24 21:52:10.163992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.262 [2024-07-24 21:52:10.164000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.262 [2024-07-24 21:52:10.164007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.262 [2024-07-24 21:52:10.166845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
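[Editor's note] The three "Reactor started" notices follow directly from the -m 0xE mask passed to nvmf_tgt: bits 1, 2 and 3 are set, so SPDK reports three available cores and runs a reactor on cores 1, 2 and 3. A trivial, illustration-only sketch of that mask decoding:

#include <stdio.h>

int main(void)
{
	unsigned long mask = 0xE;	/* core mask from "nvmf_tgt ... -m 0xE" */
	int total = 0;

	for (int core = 0; core < 64; core++) {
		if (mask & (1UL << core)) {
			printf("reactor would run on core %d\n", core);
			total++;
		}
	}
	printf("total cores available: %d\n", total);	/* prints 3 for 0xE */
	return 0;
}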
00:27:02.262 [2024-07-24 21:52:10.176061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.262 [2024-07-24 21:52:10.176769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.262 [2024-07-24 21:52:10.176788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.262 [2024-07-24 21:52:10.176795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.262 [2024-07-24 21:52:10.176974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.262 [2024-07-24 21:52:10.177160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.262 [2024-07-24 21:52:10.177169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.262 [2024-07-24 21:52:10.177176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.262 [2024-07-24 21:52:10.180006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.262 [2024-07-24 21:52:10.189214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.262 [2024-07-24 21:52:10.189917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.262 [2024-07-24 21:52:10.189935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.262 [2024-07-24 21:52:10.189943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.262 [2024-07-24 21:52:10.190126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.262 [2024-07-24 21:52:10.190305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.262 [2024-07-24 21:52:10.190313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.262 [2024-07-24 21:52:10.190326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.262 [2024-07-24 21:52:10.193163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.262 [2024-07-24 21:52:10.202370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.262 [2024-07-24 21:52:10.202939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.262 [2024-07-24 21:52:10.202956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.262 [2024-07-24 21:52:10.202964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.262 [2024-07-24 21:52:10.203146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.262 [2024-07-24 21:52:10.203329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.262 [2024-07-24 21:52:10.203338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.262 [2024-07-24 21:52:10.203345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.262 [2024-07-24 21:52:10.206177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.262 [2024-07-24 21:52:10.215540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.262 [2024-07-24 21:52:10.216128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.262 [2024-07-24 21:52:10.216145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.262 [2024-07-24 21:52:10.216152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.262 [2024-07-24 21:52:10.216331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.262 [2024-07-24 21:52:10.216507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.262 [2024-07-24 21:52:10.216516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.262 [2024-07-24 21:52:10.216522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.262 [2024-07-24 21:52:10.219357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.262 [2024-07-24 21:52:10.228723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.262 [2024-07-24 21:52:10.229367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.262 [2024-07-24 21:52:10.229383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.262 [2024-07-24 21:52:10.229390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.262 [2024-07-24 21:52:10.229567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.262 [2024-07-24 21:52:10.229745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.262 [2024-07-24 21:52:10.229753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.262 [2024-07-24 21:52:10.229760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.262 [2024-07-24 21:52:10.232597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.262 [2024-07-24 21:52:10.241800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.262 [2024-07-24 21:52:10.242472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.262 [2024-07-24 21:52:10.242487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.262 [2024-07-24 21:52:10.242494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.262 [2024-07-24 21:52:10.242671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.262 [2024-07-24 21:52:10.242848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.262 [2024-07-24 21:52:10.242856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.262 [2024-07-24 21:52:10.242863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.262 [2024-07-24 21:52:10.245701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.262 [2024-07-24 21:52:10.254881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.262 [2024-07-24 21:52:10.255569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.262 [2024-07-24 21:52:10.255585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.262 [2024-07-24 21:52:10.255592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.262 [2024-07-24 21:52:10.255768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.262 [2024-07-24 21:52:10.255945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.262 [2024-07-24 21:52:10.255952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.262 [2024-07-24 21:52:10.255958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.262 [2024-07-24 21:52:10.258795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.262 [2024-07-24 21:52:10.267992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.262 [2024-07-24 21:52:10.268701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.262 [2024-07-24 21:52:10.268718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.262 [2024-07-24 21:52:10.268724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.262 [2024-07-24 21:52:10.268901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.262 [2024-07-24 21:52:10.269083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.262 [2024-07-24 21:52:10.269092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.262 [2024-07-24 21:52:10.269098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.262 [2024-07-24 21:52:10.271934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.262 [2024-07-24 21:52:10.281139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.262 [2024-07-24 21:52:10.281832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.263 [2024-07-24 21:52:10.281848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.263 [2024-07-24 21:52:10.281854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.263 [2024-07-24 21:52:10.282031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.263 [2024-07-24 21:52:10.282216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.263 [2024-07-24 21:52:10.282225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.263 [2024-07-24 21:52:10.282232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.263 [2024-07-24 21:52:10.285063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.263 [2024-07-24 21:52:10.294244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.263 [2024-07-24 21:52:10.294933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.263 [2024-07-24 21:52:10.294948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.263 [2024-07-24 21:52:10.294955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.263 [2024-07-24 21:52:10.295136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.263 [2024-07-24 21:52:10.295312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.263 [2024-07-24 21:52:10.295320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.263 [2024-07-24 21:52:10.295327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.263 [2024-07-24 21:52:10.298160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.263 [2024-07-24 21:52:10.307371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.263 [2024-07-24 21:52:10.308081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.263 [2024-07-24 21:52:10.308097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.263 [2024-07-24 21:52:10.308104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.263 [2024-07-24 21:52:10.308281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.263 [2024-07-24 21:52:10.308458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.263 [2024-07-24 21:52:10.308466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.263 [2024-07-24 21:52:10.308472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.263 [2024-07-24 21:52:10.311304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.263 [2024-07-24 21:52:10.320508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.263 [2024-07-24 21:52:10.321204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.263 [2024-07-24 21:52:10.321220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.263 [2024-07-24 21:52:10.321227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.263 [2024-07-24 21:52:10.321403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.263 [2024-07-24 21:52:10.321585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.263 [2024-07-24 21:52:10.321593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.263 [2024-07-24 21:52:10.321599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.263 [2024-07-24 21:52:10.324448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.263 [2024-07-24 21:52:10.333647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.263 [2024-07-24 21:52:10.334341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.263 [2024-07-24 21:52:10.334357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.263 [2024-07-24 21:52:10.334364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.263 [2024-07-24 21:52:10.334540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.263 [2024-07-24 21:52:10.334717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.263 [2024-07-24 21:52:10.334725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.263 [2024-07-24 21:52:10.334731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.263 [2024-07-24 21:52:10.337602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.263 [2024-07-24 21:52:10.346787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.263 [2024-07-24 21:52:10.347494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.263 [2024-07-24 21:52:10.347511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.263 [2024-07-24 21:52:10.347518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.263 [2024-07-24 21:52:10.347695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.263 [2024-07-24 21:52:10.347873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.263 [2024-07-24 21:52:10.347881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.263 [2024-07-24 21:52:10.347887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.263 [2024-07-24 21:52:10.350728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.263 [2024-07-24 21:52:10.359930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.263 [2024-07-24 21:52:10.360543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.263 [2024-07-24 21:52:10.360559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.263 [2024-07-24 21:52:10.360566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.263 [2024-07-24 21:52:10.360742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.263 [2024-07-24 21:52:10.360920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.263 [2024-07-24 21:52:10.360928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.263 [2024-07-24 21:52:10.360934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.263 [2024-07-24 21:52:10.363764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.263 [2024-07-24 21:52:10.373140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.263 [2024-07-24 21:52:10.373786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.263 [2024-07-24 21:52:10.373808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.263 [2024-07-24 21:52:10.373815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.263 [2024-07-24 21:52:10.373992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.263 [2024-07-24 21:52:10.374175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.263 [2024-07-24 21:52:10.374183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.263 [2024-07-24 21:52:10.374190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.524 [2024-07-24 21:52:10.377018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.524 [2024-07-24 21:52:10.386208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.524 [2024-07-24 21:52:10.386848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-24 21:52:10.386866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.524 [2024-07-24 21:52:10.386875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.525 [2024-07-24 21:52:10.387058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.525 [2024-07-24 21:52:10.387237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.525 [2024-07-24 21:52:10.387246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.525 [2024-07-24 21:52:10.387252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.525 [2024-07-24 21:52:10.390089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.525 [2024-07-24 21:52:10.399302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.525 [2024-07-24 21:52:10.399970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-24 21:52:10.399987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.525 [2024-07-24 21:52:10.399994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.525 [2024-07-24 21:52:10.400174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.525 [2024-07-24 21:52:10.400352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.525 [2024-07-24 21:52:10.400360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.525 [2024-07-24 21:52:10.400366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.525 [2024-07-24 21:52:10.403206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.525 [2024-07-24 21:52:10.412417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.525 [2024-07-24 21:52:10.413014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-24 21:52:10.413032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.525 [2024-07-24 21:52:10.413039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.525 [2024-07-24 21:52:10.413225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.525 [2024-07-24 21:52:10.413410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.525 [2024-07-24 21:52:10.413419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.525 [2024-07-24 21:52:10.413425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.525 [2024-07-24 21:52:10.416258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.525 [2024-07-24 21:52:10.425621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.525 [2024-07-24 21:52:10.426224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-24 21:52:10.426240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.525 [2024-07-24 21:52:10.426247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.525 [2024-07-24 21:52:10.426425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.525 [2024-07-24 21:52:10.426603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.525 [2024-07-24 21:52:10.426611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.525 [2024-07-24 21:52:10.426617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.525 [2024-07-24 21:52:10.429452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.525 [2024-07-24 21:52:10.438676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.525 [2024-07-24 21:52:10.439288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-24 21:52:10.439305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.525 [2024-07-24 21:52:10.439312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.525 [2024-07-24 21:52:10.439490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.525 [2024-07-24 21:52:10.439667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.525 [2024-07-24 21:52:10.439675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.525 [2024-07-24 21:52:10.439681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.525 [2024-07-24 21:52:10.442515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.525 [2024-07-24 21:52:10.451733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.525 [2024-07-24 21:52:10.452326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-24 21:52:10.452343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.525 [2024-07-24 21:52:10.452350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.525 [2024-07-24 21:52:10.452526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.525 [2024-07-24 21:52:10.452704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.525 [2024-07-24 21:52:10.452712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.525 [2024-07-24 21:52:10.452719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.525 [2024-07-24 21:52:10.455556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.525 [2024-07-24 21:52:10.465135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.525 [2024-07-24 21:52:10.465686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-24 21:52:10.465703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.525 [2024-07-24 21:52:10.465710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.525 [2024-07-24 21:52:10.465888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.525 [2024-07-24 21:52:10.466071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.525 [2024-07-24 21:52:10.466079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.525 [2024-07-24 21:52:10.466086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.525 [2024-07-24 21:52:10.468918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.525 [2024-07-24 21:52:10.478286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.525 [2024-07-24 21:52:10.478912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-24 21:52:10.478929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.525 [2024-07-24 21:52:10.478936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.525 [2024-07-24 21:52:10.479119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.525 [2024-07-24 21:52:10.479296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.525 [2024-07-24 21:52:10.479304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.525 [2024-07-24 21:52:10.479311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.525 [2024-07-24 21:52:10.482145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.525 [2024-07-24 21:52:10.491360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.525 [2024-07-24 21:52:10.491977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-24 21:52:10.491993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.525 [2024-07-24 21:52:10.492000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.525 [2024-07-24 21:52:10.492183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.525 [2024-07-24 21:52:10.492360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.525 [2024-07-24 21:52:10.492369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.525 [2024-07-24 21:52:10.492375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.525 [2024-07-24 21:52:10.495212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.525 [2024-07-24 21:52:10.504429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.525 [2024-07-24 21:52:10.504972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-24 21:52:10.504988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.525 [2024-07-24 21:52:10.504998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.525 [2024-07-24 21:52:10.505180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.525 [2024-07-24 21:52:10.505357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.525 [2024-07-24 21:52:10.505366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.525 [2024-07-24 21:52:10.505373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.525 [2024-07-24 21:52:10.508211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.525 [2024-07-24 21:52:10.517573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.525 [2024-07-24 21:52:10.518216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-24 21:52:10.518233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.526 [2024-07-24 21:52:10.518240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.526 [2024-07-24 21:52:10.518417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.526 [2024-07-24 21:52:10.518595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.526 [2024-07-24 21:52:10.518603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.526 [2024-07-24 21:52:10.518609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.526 [2024-07-24 21:52:10.521445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.526 [2024-07-24 21:52:10.530650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.526 [2024-07-24 21:52:10.531292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-24 21:52:10.531309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.526 [2024-07-24 21:52:10.531316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.526 [2024-07-24 21:52:10.531493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.526 [2024-07-24 21:52:10.531671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.526 [2024-07-24 21:52:10.531679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.526 [2024-07-24 21:52:10.531685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.526 [2024-07-24 21:52:10.534523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.526 [2024-07-24 21:52:10.543726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.526 [2024-07-24 21:52:10.544279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-24 21:52:10.544298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.526 [2024-07-24 21:52:10.544305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.526 [2024-07-24 21:52:10.544483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.526 [2024-07-24 21:52:10.544660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.526 [2024-07-24 21:52:10.544671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.526 [2024-07-24 21:52:10.544677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.526 [2024-07-24 21:52:10.547513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.526 [2024-07-24 21:52:10.556883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.526 [2024-07-24 21:52:10.557529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-24 21:52:10.557545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.526 [2024-07-24 21:52:10.557552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.526 [2024-07-24 21:52:10.557729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.526 [2024-07-24 21:52:10.557906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.526 [2024-07-24 21:52:10.557915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.526 [2024-07-24 21:52:10.557921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.526 [2024-07-24 21:52:10.560757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.526 [2024-07-24 21:52:10.569967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.526 [2024-07-24 21:52:10.570562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-24 21:52:10.570578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.526 [2024-07-24 21:52:10.570585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.526 [2024-07-24 21:52:10.570762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.526 [2024-07-24 21:52:10.570940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.526 [2024-07-24 21:52:10.570948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.526 [2024-07-24 21:52:10.570954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.526 [2024-07-24 21:52:10.573798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.526 [2024-07-24 21:52:10.583179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.526 [2024-07-24 21:52:10.583818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-24 21:52:10.583833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.526 [2024-07-24 21:52:10.583840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.526 [2024-07-24 21:52:10.584017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.526 [2024-07-24 21:52:10.584200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.526 [2024-07-24 21:52:10.584209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.526 [2024-07-24 21:52:10.584215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.526 [2024-07-24 21:52:10.587050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.526 [2024-07-24 21:52:10.596240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.526 [2024-07-24 21:52:10.596889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-24 21:52:10.596905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.526 [2024-07-24 21:52:10.596912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.526 [2024-07-24 21:52:10.597093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.526 [2024-07-24 21:52:10.597272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.526 [2024-07-24 21:52:10.597280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.526 [2024-07-24 21:52:10.597286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.526 [2024-07-24 21:52:10.600127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.526 [2024-07-24 21:52:10.609335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.526 [2024-07-24 21:52:10.609729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-24 21:52:10.609745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.526 [2024-07-24 21:52:10.609752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.526 [2024-07-24 21:52:10.609929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.526 [2024-07-24 21:52:10.610111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.526 [2024-07-24 21:52:10.610120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.526 [2024-07-24 21:52:10.610126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.526 [2024-07-24 21:52:10.612955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.526 [2024-07-24 21:52:10.622498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.526 [2024-07-24 21:52:10.623162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-24 21:52:10.623179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.526 [2024-07-24 21:52:10.623186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.526 [2024-07-24 21:52:10.623363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.526 [2024-07-24 21:52:10.623540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.526 [2024-07-24 21:52:10.623549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.526 [2024-07-24 21:52:10.623555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.526 [2024-07-24 21:52:10.626393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.526 [2024-07-24 21:52:10.635586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.526 [2024-07-24 21:52:10.636183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-24 21:52:10.636199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.526 [2024-07-24 21:52:10.636206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.526 [2024-07-24 21:52:10.636387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.527 [2024-07-24 21:52:10.636564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.527 [2024-07-24 21:52:10.636572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.527 [2024-07-24 21:52:10.636579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.527 [2024-07-24 21:52:10.639415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.788 [2024-07-24 21:52:10.648774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.788 [2024-07-24 21:52:10.649449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.788 [2024-07-24 21:52:10.649467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.788 [2024-07-24 21:52:10.649475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.788 [2024-07-24 21:52:10.649652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.788 [2024-07-24 21:52:10.649830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.788 [2024-07-24 21:52:10.649839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.788 [2024-07-24 21:52:10.649846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.788 [2024-07-24 21:52:10.652682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.788 [2024-07-24 21:52:10.661889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.788 [2024-07-24 21:52:10.662340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.788 [2024-07-24 21:52:10.662356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.788 [2024-07-24 21:52:10.662363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.788 [2024-07-24 21:52:10.662540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.788 [2024-07-24 21:52:10.662717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.788 [2024-07-24 21:52:10.662725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.788 [2024-07-24 21:52:10.662732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.788 [2024-07-24 21:52:10.665567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.788 [2024-07-24 21:52:10.674915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.788 [2024-07-24 21:52:10.675529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.788 [2024-07-24 21:52:10.675544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.788 [2024-07-24 21:52:10.675552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.788 [2024-07-24 21:52:10.675728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.788 [2024-07-24 21:52:10.675906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.788 [2024-07-24 21:52:10.675914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.788 [2024-07-24 21:52:10.675924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.788 [2024-07-24 21:52:10.678762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.788 [2024-07-24 21:52:10.687967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.788 [2024-07-24 21:52:10.688516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.788 [2024-07-24 21:52:10.688533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.788 [2024-07-24 21:52:10.688540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.788 [2024-07-24 21:52:10.688717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.788 [2024-07-24 21:52:10.688895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.788 [2024-07-24 21:52:10.688903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 21:52:10.688910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 21:52:10.691747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.789 [2024-07-24 21:52:10.701119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 21:52:10.701660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 21:52:10.701677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 21:52:10.701684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 21:52:10.701861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 21:52:10.702038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 21:52:10.702050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 21:52:10.702057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 21:52:10.704890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.789 [2024-07-24 21:52:10.714272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 21:52:10.714918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 21:52:10.714934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 21:52:10.714941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 21:52:10.715122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 21:52:10.715299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 21:52:10.715307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 21:52:10.715313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 21:52:10.718152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.789 [2024-07-24 21:52:10.727369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 21:52:10.727973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 21:52:10.727993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 21:52:10.728000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 21:52:10.728182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 21:52:10.728363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 21:52:10.728372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 21:52:10.728378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 21:52:10.731216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.789 [2024-07-24 21:52:10.740422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 21:52:10.741062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 21:52:10.741079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 21:52:10.741087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 21:52:10.741264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 21:52:10.741443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 21:52:10.741452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 21:52:10.741458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 21:52:10.744293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.789 [2024-07-24 21:52:10.753513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 21:52:10.754113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 21:52:10.754130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 21:52:10.754138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 21:52:10.754316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 21:52:10.754494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 21:52:10.754503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 21:52:10.754512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 21:52:10.757352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.789 [2024-07-24 21:52:10.766710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 21:52:10.767114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 21:52:10.767130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 21:52:10.767138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 21:52:10.767315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 21:52:10.767496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 21:52:10.767505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 21:52:10.767511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 21:52:10.770380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.789 [2024-07-24 21:52:10.779845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 21:52:10.780383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 21:52:10.780399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 21:52:10.780408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 21:52:10.780586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 21:52:10.780768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 21:52:10.780777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 21:52:10.780783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 21:52:10.783619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.789 [2024-07-24 21:52:10.793001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 21:52:10.793428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 21:52:10.793445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 21:52:10.793452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 21:52:10.793630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 21:52:10.793808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 21:52:10.793816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 21:52:10.793823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 21:52:10.796656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.789 [2024-07-24 21:52:10.806183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 21:52:10.806853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 21:52:10.806869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 21:52:10.806876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 21:52:10.807058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 21:52:10.807234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 21:52:10.807242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 21:52:10.807249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 21:52:10.810083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.789 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:02.789 [2024-07-24 21:52:10.819268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:02.789 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:02.789 [2024-07-24 21:52:10.819968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 21:52:10.819984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 21:52:10.819991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 21:52:10.820172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.789 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:02.790 [2024-07-24 21:52:10.820350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.790 [2024-07-24 21:52:10.820358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.790 [2024-07-24 21:52:10.820364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:02.790 [2024-07-24 21:52:10.823196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.790 [2024-07-24 21:52:10.832394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.790 [2024-07-24 21:52:10.832864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.790 [2024-07-24 21:52:10.832880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.790 [2024-07-24 21:52:10.832886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.790 [2024-07-24 21:52:10.833067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.790 [2024-07-24 21:52:10.833243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.790 [2024-07-24 21:52:10.833251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.790 [2024-07-24 21:52:10.833257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.790 [2024-07-24 21:52:10.836093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.790 [2024-07-24 21:52:10.845462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.790 [2024-07-24 21:52:10.846056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.790 [2024-07-24 21:52:10.846073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.790 [2024-07-24 21:52:10.846080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.790 [2024-07-24 21:52:10.846258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.790 [2024-07-24 21:52:10.846436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.790 [2024-07-24 21:52:10.846444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.790 [2024-07-24 21:52:10.846451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.790 [2024-07-24 21:52:10.849290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:02.790 [2024-07-24 21:52:10.858126] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.790 [2024-07-24 21:52:10.858665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.790 [2024-07-24 21:52:10.859216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.790 [2024-07-24 21:52:10.859232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.790 [2024-07-24 21:52:10.859239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.790 [2024-07-24 21:52:10.859417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.790 [2024-07-24 21:52:10.859594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.790 [2024-07-24 21:52:10.859602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.790 [2024-07-24 21:52:10.859608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.790 [2024-07-24 21:52:10.862447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.790 [2024-07-24 21:52:10.871806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.790 [2024-07-24 21:52:10.872424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.790 [2024-07-24 21:52:10.872440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.790 [2024-07-24 21:52:10.872447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.790 [2024-07-24 21:52:10.872624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.790 [2024-07-24 21:52:10.872802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.790 [2024-07-24 21:52:10.872810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.790 [2024-07-24 21:52:10.872816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:02.790 [2024-07-24 21:52:10.875659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.790 [2024-07-24 21:52:10.884902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.790 [2024-07-24 21:52:10.885572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.790 [2024-07-24 21:52:10.885589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.790 [2024-07-24 21:52:10.885596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.790 [2024-07-24 21:52:10.885778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.790 [2024-07-24 21:52:10.885955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.790 [2024-07-24 21:52:10.885963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.790 [2024-07-24 21:52:10.885969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.790 [2024-07-24 21:52:10.888803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.790 Malloc0 00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.790 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:02.790 [2024-07-24 21:52:10.898007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.790 [2024-07-24 21:52:10.898670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.790 [2024-07-24 21:52:10.898686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:02.790 [2024-07-24 21:52:10.898693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:02.790 [2024-07-24 21:52:10.898871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:02.790 [2024-07-24 21:52:10.899051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.790 [2024-07-24 21:52:10.899060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.790 [2024-07-24 21:52:10.899066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.790 [2024-07-24 21:52:10.901899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.051 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.051 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:03.051 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.051 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:03.051 [2024-07-24 21:52:10.911088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.051 [2024-07-24 21:52:10.911758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.051 [2024-07-24 21:52:10.911774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4980 with addr=10.0.0.2, port=4420 00:27:03.051 [2024-07-24 21:52:10.911781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4980 is same with the state(5) to be set 00:27:03.051 [2024-07-24 21:52:10.911958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4980 (9): Bad file descriptor 00:27:03.051 [2024-07-24 21:52:10.912141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.051 [2024-07-24 21:52:10.912149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.051 [2024-07-24 21:52:10.912156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:03.051 [2024-07-24 21:52:10.914984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.051 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.051 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:03.051 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.051 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:03.051 [2024-07-24 21:52:10.920002] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.051 [2024-07-24 21:52:10.924187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.051 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.051 21:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3208674 00:27:03.051 [2024-07-24 21:52:10.951096] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:13.041 00:27:13.041 Latency(us) 00:27:13.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.041 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:13.041 Verification LBA range: start 0x0 length 0x4000 00:27:13.041 Nvme1n1 : 15.01 8151.31 31.84 12190.58 0.00 6272.17 1082.77 24276.81 00:27:13.041 =================================================================================================================== 00:27:13.041 Total : 8151.31 31.84 12190.58 0.00 6272.17 1082.77 24276.81 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:13.041 rmmod nvme_tcp 00:27:13.041 rmmod nvme_fabrics 00:27:13.041 rmmod nvme_keyring 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@489 -- # '[' -n 3209603 ']' 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3209603 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3209603 ']' 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3209603 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3209603 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3209603' 00:27:13.041 killing process with pid 3209603 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3209603 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3209603 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.041 21:52:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.017 21:52:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:14.017 00:27:14.017 real 0m25.961s 00:27:14.017 user 1m2.783s 00:27:14.017 sys 0m6.050s 00:27:14.017 21:52:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:14.017 21:52:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.017 ************************************ 00:27:14.017 END TEST nvmf_bdevperf 00:27:14.017 ************************************ 00:27:14.017 21:52:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:14.017 21:52:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:14.017 21:52:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.017 21:52:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.017 ************************************ 00:27:14.017 START TEST nvmf_target_disconnect 00:27:14.017 ************************************ 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:14.017 * Looking for test storage... 00:27:14.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.017 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:14.277 21:52:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:14.277 21:52:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:19.556 21:52:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.556 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:19.557 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:19.557 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:19.557 21:52:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:19.557 Found net devices under 0000:86:00.0: cvl_0_0 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:19.557 Found net devices under 0000:86:00.1: cvl_0_1 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:19.557 21:52:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:19.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:27:19.557 00:27:19.557 --- 10.0.0.2 ping statistics --- 00:27:19.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.557 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:19.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:27:19.557 00:27:19.557 --- 10.0.0.1 ping statistics --- 00:27:19.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.557 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:19.557 ************************************ 00:27:19.557 START TEST nvmf_target_disconnect_tc1 00:27:19.557 ************************************ 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:19.557 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:19.558 21:52:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:19.558 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.558 [2024-07-24 21:52:27.526964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.558 [2024-07-24 21:52:27.527075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca2e60 with addr=10.0.0.2, port=4420 00:27:19.558 [2024-07-24 21:52:27.527130] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:19.558 [2024-07-24 21:52:27.527162] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:19.558 [2024-07-24 21:52:27.527180] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:19.558 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:19.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:19.558 Initializing NVMe Controllers 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:19.558 00:27:19.558 real 0m0.094s 00:27:19.558 user 0m0.041s 00:27:19.558 sys 0m0.051s 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:19.558 ************************************ 00:27:19.558 END TEST nvmf_target_disconnect_tc1 00:27:19.558 ************************************ 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:19.558 21:52:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:19.558 ************************************ 00:27:19.558 START TEST nvmf_target_disconnect_tc2 00:27:19.558 ************************************ 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3214558 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3214558 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3214558 ']' 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.558 21:52:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:19.558 [2024-07-24 21:52:27.654699] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:27:19.558 [2024-07-24 21:52:27.654740] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.819 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.819 [2024-07-24 21:52:27.723317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:19.819 [2024-07-24 21:52:27.802939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:19.819 [2024-07-24 21:52:27.802975] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.819 [2024-07-24 21:52:27.802982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.819 [2024-07-24 21:52:27.802989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.819 [2024-07-24 21:52:27.802994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:19.819 [2024-07-24 21:52:27.803113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:19.819 [2024-07-24 21:52:27.803208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:19.819 [2024-07-24 21:52:27.803704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:19.819 [2024-07-24 21:52:27.803704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:20.387 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:20.388 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:20.388 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:20.388 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:20.388 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.388 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.388 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:20.388 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.388 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.647 Malloc0 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.647 [2024-07-24 21:52:28.512556] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.647 [2024-07-24 21:52:28.537558] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3214803 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:20.647 21:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:20.647 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.559 21:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3214558 00:27:22.559 21:52:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting 
I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 [2024-07-24 21:52:30.563910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 
00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Write completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.559 Read completed with error (sct=0, sc=8) 00:27:22.559 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 [2024-07-24 21:52:30.564114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write 
completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Write completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 Read completed with error (sct=0, sc=8) 00:27:22.560 starting I/O failed 00:27:22.560 [2024-07-24 21:52:30.564335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:22.560 [2024-07-24 21:52:30.564664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.564681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.565092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.565127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.565528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.565561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.566304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.566339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.566783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.566793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.567290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.567322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 
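The burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" entries above is the host flushing its outstanding commands once the queue pair drops: with status code type 0, status code 8 (0x08) is listed in the NVMe base specification as "Command Aborted due to SQ Deletion", and the CQ transport error -6 is a negated ENXIO ("No such device or address"). A minimal sketch, not taken from SPDK itself, of how the two fields are normally unpacked from dword 3 of a completion queue entry; the field offsets are assumed from the NVMe base specification and the helper name is illustrative:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: split CQE dword 3 into the status fields printed above. */
static void decode_cqe_dw3(uint32_t dw3)
{
    uint32_t sc  = (dw3 >> 17) & 0xff; /* Status Code        (status field bits 7:0)  */
    uint32_t sct = (dw3 >> 25) & 0x7;  /* Status Code Type   (status field bits 10:8) */
    uint32_t dnr = (dw3 >> 31) & 0x1;  /* Do Not Retry       (status field bit 14)    */
    printf("sct=%u, sc=%u, dnr=%u\n", sct, sc, dnr);
}

int main(void)
{
    decode_cqe_dw3(8u << 17); /* prints "sct=0, sc=8, dnr=0" - the value seen in the log */
    return 0;
}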
00:27:22.560 [2024-07-24 21:52:30.567719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.567734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.568096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.568110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.568469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.568500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.568892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.568922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.569372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.569405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.569795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.569825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.570319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.570350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.570802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.570832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.571345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.571376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.571768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.571798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 
00:27:22.560 [2024-07-24 21:52:30.572305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.572336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.572820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.572849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.573440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.573472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.573933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.573963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.574422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.574453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.560 qpair failed and we were unable to recover it. 00:27:22.560 [2024-07-24 21:52:30.574931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.560 [2024-07-24 21:52:30.574968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.575495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.575527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.575937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.575967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.576420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.576450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.576881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.576896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 
00:27:22.561 [2024-07-24 21:52:30.577317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.577348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.577787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.577818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.578276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.578307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.578828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.578858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.579396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.579433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.579836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.579850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.580388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.580420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.580899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.580913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.581370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.581384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.581816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.581830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 
00:27:22.561 [2024-07-24 21:52:30.582322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.582337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.582727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.582741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.583183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.583197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.583630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.583643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.584080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.584113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.584554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.584585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.585142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.585173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.585557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.585587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.586236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.586267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.586711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.586740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 
00:27:22.561 [2024-07-24 21:52:30.587200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.587232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.587675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.587706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.588138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.588171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.588619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.588648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.589187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.589218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.589709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.589739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.590208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.590240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.590681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.590711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.591268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.591307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.591715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.591729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 
00:27:22.561 [2024-07-24 21:52:30.592237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.592251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.592738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.592769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.561 [2024-07-24 21:52:30.593231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.561 [2024-07-24 21:52:30.593262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.561 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.593638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.593652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.594113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.594144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.594676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.594712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.595215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.595246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.595788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.595819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.596380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.596412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.596999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.597029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-07-24 21:52:30.597451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.597483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.597961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.597991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.598452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.598483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.599056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.599087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.599602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.599632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.600088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.600120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.600568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.600598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.601049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.601064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.601448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.601478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.601882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.601912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-07-24 21:52:30.602442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.602473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.603030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.603067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.603533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.603563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.604023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.604038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.604476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.604507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.604983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.605012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.605428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.605459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.605903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.605933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.606442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.606475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.607012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.607049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-07-24 21:52:30.607521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.607551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.608113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.608144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.608627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.608658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.609093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.609124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.609517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.609547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.610104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.610136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.610704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.610747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.611280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.611324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.611767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.611798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 00:27:22.562 [2024-07-24 21:52:30.612285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.562 [2024-07-24 21:52:30.612315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-07-24 21:52:30.612797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.612827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.613303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.613334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.613777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.613807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.614315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.614346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.614887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.614917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.615362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.615399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.615797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.615827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.616282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.616313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.616753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.616783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.617281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.617312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-07-24 21:52:30.617737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.617767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.618260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.618292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.618817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.618847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.619387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.619418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.619958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.619988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.620404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.620435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.620828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.620859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.621311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.621341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.621718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.621759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.622282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.622314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-07-24 21:52:30.622763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.622793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.623284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.623316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.623783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.623813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.624291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.624322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.624764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.624795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.625289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.625320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.625719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.625748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.626257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.626289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.626684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.626715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.627164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.627179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-07-24 21:52:30.627532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.627546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.628009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.628039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.628459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.628494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.628880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.628910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.629311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.629342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.629850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.629864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.630377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.563 [2024-07-24 21:52:30.630408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.563 qpair failed and we were unable to recover it. 00:27:22.563 [2024-07-24 21:52:30.630881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.630911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.631371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.631403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.631797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.631828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 
00:27:22.564 [2024-07-24 21:52:30.632355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.632391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.632874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.632888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.633430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.633461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.634003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.634033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.634490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.634534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.635014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.635028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.635474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.635488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.635994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.636025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.636587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.636618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.637126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.637158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 
00:27:22.564 [2024-07-24 21:52:30.637824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.637854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.638359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.638389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.638845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.638875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.639443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.639475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.640025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.640063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.640542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.640573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.641066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.641097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.641550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.641580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.642123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.642155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 00:27:22.564 [2024-07-24 21:52:30.642572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.564 [2024-07-24 21:52:30.642602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.564 qpair failed and we were unable to recover it. 
00:27:22.836 [2024-07-24 21:52:30.746641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.746673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.747190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.747222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.747677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.747707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.748253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.748285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.748739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.748770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.749214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.749245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.749778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.749808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.750255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.750288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.750755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.750786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.751293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.751327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 
00:27:22.836 [2024-07-24 21:52:30.751778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.751808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.752287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.752319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.752773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.752810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.753366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.753397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.753919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.753949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.754424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.754455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.754931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.754961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.755423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.755455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.756016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.836 [2024-07-24 21:52:30.756055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.836 qpair failed and we were unable to recover it. 00:27:22.836 [2024-07-24 21:52:30.756543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.756574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 
00:27:22.837 [2024-07-24 21:52:30.757097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.757129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.757654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.757685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.758192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.758223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.758686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.758716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.759242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.759274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.759676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.759706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.760222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.760255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.760762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.760793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.761205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.761237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.761764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.761795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 
00:27:22.837 [2024-07-24 21:52:30.762284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.762316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.762712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.762743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.763257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.763272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.763708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.763724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.764147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.764179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.764587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.764617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.765072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.765104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.765609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.765640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.766187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.766231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.766735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.766766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 
00:27:22.837 [2024-07-24 21:52:30.767341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.767372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.767816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.767847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.768379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.768410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.768924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.768955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.769458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.769490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.769958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.770002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.770493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.770525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.771089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.771121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.771626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.771657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.772171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.772187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 
00:27:22.837 [2024-07-24 21:52:30.772636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.772666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.773197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.773229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.773731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.773769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.774316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.774350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.774821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.774851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.775549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.775583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.776061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.776093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.776561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.776592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.777064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.777104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.777525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.777540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 
00:27:22.837 [2024-07-24 21:52:30.778077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.778109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.778592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.778622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.779146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.779178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.779643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.779673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.780229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.780261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.780661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.780692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.781166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.781199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.781582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.781613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.782088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.837 [2024-07-24 21:52:30.782119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.837 qpair failed and we were unable to recover it. 00:27:22.837 [2024-07-24 21:52:30.782651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.782681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 
00:27:22.838 [2024-07-24 21:52:30.783153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.783185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.783690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.783727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.784220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.784234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.784710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.784741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.785255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.785286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.785805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.785835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.786298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.786329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.786784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.786815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.787280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.787312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.787852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.787883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 
00:27:22.838 [2024-07-24 21:52:30.788343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.788375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.788771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.788802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.789357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.789389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.789960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.789991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.790573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.790604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.791173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.791205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.791689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.791720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.792265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.792280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.792732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.792762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.793294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.793326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 
00:27:22.838 [2024-07-24 21:52:30.793741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.793772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.794316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.794348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.794925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.794966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.795479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.795511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.796064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.796097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.796504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.796534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.797064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.797096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.797553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.797585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.798289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.798323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.798884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.798914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 
00:27:22.838 [2024-07-24 21:52:30.799475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.799508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.800022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.800074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.800486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.800516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.801069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.801100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.801655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.801686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.802210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.802225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.802651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.802665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.803150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.803182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.803586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.803617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.804114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.804146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 
00:27:22.838 [2024-07-24 21:52:30.804566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.804596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.805096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.805128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.805528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.805544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.806023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.806064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.806560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.806591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.807066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.807098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.807606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.807637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.808185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.808217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.808629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.808659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.809193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.809208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 
00:27:22.838 [2024-07-24 21:52:30.809587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.809618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.810136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.810174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.810610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.810641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.811193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.811208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.811644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.811674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.812200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.812231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.812668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.812698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.813147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.813191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.813605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.813620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.814163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.814194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 
00:27:22.838 [2024-07-24 21:52:30.814676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.838 [2024-07-24 21:52:30.814706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.838 qpair failed and we were unable to recover it. 00:27:22.838 [2024-07-24 21:52:30.815129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.815161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.815745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.815781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.816338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.816353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.816852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.816883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.817305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.817320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.817739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.817753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.818189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.818220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.818699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.818730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.820193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.820230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 
00:27:22.839 [2024-07-24 21:52:30.820624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.820640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.821132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.821166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.821574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.821605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.822134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.822166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.822679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.822695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.823165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.823198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.823726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.823757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.824266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.824298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.824976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.825007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.827008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.827051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 
00:27:22.839 [2024-07-24 21:52:30.827573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.827591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.828055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.828089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.828863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.828896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.829404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.829436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.829988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.830019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.830482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.830498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.830874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.830888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.831326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.831343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.831775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.831790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.832250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.832266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 
00:27:22.839 [2024-07-24 21:52:30.832866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.832897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.833359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.833374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.833849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.833880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.834379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.834394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.834821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.834851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.835353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.835384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.835897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.835913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.836340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.836355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.836709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.836723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.837235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.837267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 
00:27:22.839 [2024-07-24 21:52:30.837741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.837772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.838323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.838357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.838759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.838796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.839260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.839293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.839709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.839739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.840122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.840154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.840662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.840692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.841133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.841148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.841597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.841612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.842049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.842064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 
00:27:22.839 [2024-07-24 21:52:30.842548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.842578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.843122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.843154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.843624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.843656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.844058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.844090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.844526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.844540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.845065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.845081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.845468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.845499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.845952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.845982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.846545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.846579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.847070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.847103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 
00:27:22.839 [2024-07-24 21:52:30.847572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.847604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.848118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.848151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.839 qpair failed and we were unable to recover it. 00:27:22.839 [2024-07-24 21:52:30.848608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.839 [2024-07-24 21:52:30.848640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.849100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.849116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.849492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.849507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.849872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.849887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.850317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.850332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.850654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.850669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.851173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.851188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.851662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.851678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 
00:27:22.840 [2024-07-24 21:52:30.852105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.852120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.852607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.852622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.852980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.852995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.853567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.853583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.854155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.854170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.854529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.854544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.854966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.854980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.855416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.855432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.855785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.855801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.856872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.856904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 
00:27:22.840 [2024-07-24 21:52:30.857486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.857504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.857947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.857962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.858391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.858411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.858790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.858805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.859170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.859186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.859610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.859625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.860066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.860081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.860445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.860459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.860827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.860841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.861271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.861286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 
00:27:22.840 [2024-07-24 21:52:30.861714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.861729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.862239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.862254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.862653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.862667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.863096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.863112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.863483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.863497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.863873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.863888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.864250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.864265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.864680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.864694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.865195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.865226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.865645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.865677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 
00:27:22.840 [2024-07-24 21:52:30.866188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.866221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.866679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.866710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.867242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.867274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.867694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.867726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.868269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.868284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.868668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.868683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.869061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.869076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.869452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.869484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.869888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.869919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e40000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.870435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.870467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 
00:27:22.840 [2024-07-24 21:52:30.870976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.870989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.871386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.871398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.871768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.871799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.872276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.872289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.872707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.872718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.873211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.873222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.873592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.873622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.874135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.840 [2024-07-24 21:52:30.874167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.840 qpair failed and we were unable to recover it. 00:27:22.840 [2024-07-24 21:52:30.874578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.874607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.875192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.875223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 
00:27:22.841 [2024-07-24 21:52:30.875624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.875653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.876220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.876251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.876643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.876681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.877172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.877203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.877602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.877631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.878094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.878105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.878451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.878462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.878812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.878823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.879307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.879339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.879777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.879807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 
00:27:22.841 [2024-07-24 21:52:30.880313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.880345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.880804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.880834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.881337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.881368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.881794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.881824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.882306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.882337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.882819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.882849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.883421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.883460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.883823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.883834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.884270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.884282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.884664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.884675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 
00:27:22.841 [2024-07-24 21:52:30.885183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.885193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.885608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.885639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.886179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.886210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.886749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.886780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.887257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.887289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.887704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.887733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.888257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.888269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.888630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.888640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.888999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.889029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.889501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.889533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 
00:27:22.841 [2024-07-24 21:52:30.890005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.890036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.890533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.890563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.891179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.891217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.891634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.891645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.892085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.892097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.892481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.892511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.893087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.893118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.893595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.893625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.894185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.894217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.894951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.894984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 
00:27:22.841 [2024-07-24 21:52:30.895490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.895521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.896057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.896088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.896569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.896599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.897146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.897178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.897719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.897750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.898271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.898301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.898693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.898724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.899289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.899321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.899771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.899782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.900295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.900327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 
00:27:22.841 [2024-07-24 21:52:30.900858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.900887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.901444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.901476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.901885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.901914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.902434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.902465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.903083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.903114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.841 [2024-07-24 21:52:30.903670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.841 [2024-07-24 21:52:30.903681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.841 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.904187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.904218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.904675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.904705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.905226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.905258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.905774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.905804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 
00:27:22.842 [2024-07-24 21:52:30.906251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.906282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.906679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.906710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.907267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.907298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.907864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.907894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.908364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.908395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.908901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.908932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.909485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.909516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.910158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.910188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.910666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.910695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.911148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.911185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 
00:27:22.842 [2024-07-24 21:52:30.911692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.911722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.912273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.912284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.912694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.912704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.913194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.913228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.913752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.913783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.914279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.914310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.914784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.914815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.915342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.915374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.915833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.915863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.916389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.916420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 
00:27:22.842 [2024-07-24 21:52:30.916902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.916932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.917456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.917487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.918078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.918110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.918584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.918615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.919349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.919382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.919914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.919945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.920399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.920430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.920880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.920910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.921402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.921433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.922005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.922036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 
00:27:22.842 [2024-07-24 21:52:30.922467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.922499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.922972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.923004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.923531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.923563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.924061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.924093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.924551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.924583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.924953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.924983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.925401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.925434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.925886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.925919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.926352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.926363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.926734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.926765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 
00:27:22.842 [2024-07-24 21:52:30.927252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.927283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.927742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.927753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.928186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.928221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.928686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.928716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.929197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.929228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.929802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.929833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.930373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.930405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.930972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.931003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.931517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.931548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.932086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.932124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 
00:27:22.842 [2024-07-24 21:52:30.932604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.842 [2024-07-24 21:52:30.932635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.842 qpair failed and we were unable to recover it. 00:27:22.842 [2024-07-24 21:52:30.933150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.933181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.933684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.933716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.934236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.934268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.934678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.934708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.935247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.935258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.935691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.935701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.936122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.936154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.936706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.936737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.937226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.937257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 
00:27:22.843 [2024-07-24 21:52:30.937731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.937763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.938267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.938299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.938694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.938724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.939212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.939243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.939699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.939731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.940228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.940240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.940632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.940662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.941144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.941176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:22.843 [2024-07-24 21:52:30.941651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.843 [2024-07-24 21:52:30.941683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:22.843 qpair failed and we were unable to recover it. 00:27:23.110 [2024-07-24 21:52:30.942144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.942177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 
00:27:23.111 [2024-07-24 21:52:30.942567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.942598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.943206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.943238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.943769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.943800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.944249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.944281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.944735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.944747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.945148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.945159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.945571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.945582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.946028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.946070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.946546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.946576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.947023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.947064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 
00:27:23.111 [2024-07-24 21:52:30.947471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.947501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.948031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.948072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.948626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.948656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.949205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.949236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.949690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.949721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.950276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.950308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.950797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.950826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.951281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.951313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.951824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.951834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.952306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.952343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 
00:27:23.111 [2024-07-24 21:52:30.952812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.952842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.953367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.953398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.953959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.953990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.954472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.954504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.955095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.955125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.955609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.955640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.956149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.956180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.956725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.956755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.957222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.957254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.957784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.957815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 
00:27:23.111 [2024-07-24 21:52:30.958304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.958335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.958806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.958836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.959358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.959390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.960007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.960037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.960455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.960485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.960992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.111 [2024-07-24 21:52:30.961005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.111 qpair failed and we were unable to recover it. 00:27:23.111 [2024-07-24 21:52:30.961508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.961539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.962086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.962118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.962521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.962532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.962962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.962992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 
00:27:23.112 [2024-07-24 21:52:30.963527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.963539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.964031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.964070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.964646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.964677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.965233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.965278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.965752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.965782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.966314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.966345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.966808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.966839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.967420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.967451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.967860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.967890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.968278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.968289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 
00:27:23.112 [2024-07-24 21:52:30.968778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.968808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.969268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.969279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.969709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.969739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.970249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.970280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.970717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.970747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.971277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.971309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.971859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.971888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.972411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.972441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.972888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.972917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.973415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.973468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 
00:27:23.112 [2024-07-24 21:52:30.973967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.973978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.974402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.974413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.974902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.974932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.975441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.975471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.975987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.976017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.976439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.976470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.977020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.977060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.977619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.977649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.978205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.978237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.978791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.978821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 
00:27:23.112 [2024-07-24 21:52:30.979395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.979427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.979968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.979998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.112 [2024-07-24 21:52:30.980566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.112 [2024-07-24 21:52:30.980597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.112 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.981059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.981090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.981543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.981577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.982087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.982115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.982623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.982660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.983106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.983117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.983531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.983561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.984127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.984158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 
00:27:23.113 [2024-07-24 21:52:30.984744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.984774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.985314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.985344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.985847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.985878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.986436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.986467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.986990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.987020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.987593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.987625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.988142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.988174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.988679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.988709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.989300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.989331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.989885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.989915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 
00:27:23.113 [2024-07-24 21:52:30.990318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.990349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.990896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.990926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.991515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.991547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.992128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.992159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.992698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.992728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.993271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.993309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.993825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.993854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.994428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.994459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.994927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.994958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.995476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.995514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 
00:27:23.113 [2024-07-24 21:52:30.996081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.996113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.996657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.996687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.997238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.997269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.997832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.997863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.998436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.998468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.998885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.998915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:30.999463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:30.999494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:31.000057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:31.000089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:31.000623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:31.000654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 00:27:23.113 [2024-07-24 21:52:31.001200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.113 [2024-07-24 21:52:31.001232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.113 qpair failed and we were unable to recover it. 
00:27:23.114 [2024-07-24 21:52:31.001782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.001812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.002309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.002340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.002807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.002838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.003327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.003358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.003863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.003893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.004353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.004384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.004936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.004966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.005537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.005569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.006088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.006119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.006572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.006602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 
00:27:23.114 [2024-07-24 21:52:31.007130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.007162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.007703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.007733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.008241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.008273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.008799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.008830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.009357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.009388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.009935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.009965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.010522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.010553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.011094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.011125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.011650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.011679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.012135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.012166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 
00:27:23.114 [2024-07-24 21:52:31.012691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.012721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.013282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.013316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.013792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.013822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.014381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.014412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.014902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.014932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.015380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.015412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.015913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.015943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.016504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.016535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.017080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.017111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.017632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.017668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 
00:27:23.114 [2024-07-24 21:52:31.018129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.018161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.018606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.018637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.019082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.019093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.019592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.019622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.020175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.020205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.020768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.020799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.021306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.114 [2024-07-24 21:52:31.021337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.114 qpair failed and we were unable to recover it. 00:27:23.114 [2024-07-24 21:52:31.021874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.021904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.022346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.022378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.022916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.022946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 
00:27:23.115 [2024-07-24 21:52:31.023521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.023552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.024132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.024163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.024702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.024732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.025269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.025301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.025802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.025814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.026329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.026361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.026859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.026870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.027366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.027397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.027939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.027972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.028462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.028473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 
00:27:23.115 [2024-07-24 21:52:31.028989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.029019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.029582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.029615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.030106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.030138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.030602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.030638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.031053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.031063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.031555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.031586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.032148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.032180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.032739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.032769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.033275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.033306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.033834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.033864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 
00:27:23.115 [2024-07-24 21:52:31.034386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.034418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.034946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.034976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.035430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.035462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.036016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.036056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.036610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.036641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.037205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.037237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.037768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.037798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.038347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.115 [2024-07-24 21:52:31.038379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.115 qpair failed and we were unable to recover it. 00:27:23.115 [2024-07-24 21:52:31.038929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.038959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.039482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.039519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 
00:27:23.116 [2024-07-24 21:52:31.040004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.040035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.040518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.040548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.041016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.041055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.041557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.041587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.042152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.042183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.042690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.042720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.043165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.043196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.043720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.043751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.044300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.044331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.044903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.044934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 
00:27:23.116 [2024-07-24 21:52:31.045389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.045421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.045950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.045981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.046531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.046563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.047146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.047178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.047693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.047724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.048249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.048281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.048833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.048863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.049342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.049373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.049899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.049928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.050418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.050449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 
00:27:23.116 [2024-07-24 21:52:31.050861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.050891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.051427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.051459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.051907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.051937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.052396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.052427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.052974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.053004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.053493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.053524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.054007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.054038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.054488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.054518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.055006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.055036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.055486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.055517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 
00:27:23.116 [2024-07-24 21:52:31.056055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.056087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.056559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.056569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.057065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.057097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.057649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.057680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.116 [2024-07-24 21:52:31.058247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.116 [2024-07-24 21:52:31.058279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.116 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.058802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.058833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.059293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.059324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.059827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.059857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.060379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.060410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.060962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.060998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 
00:27:23.117 [2024-07-24 21:52:31.061559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.061590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.062142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.062174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.062697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.062728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.063290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.063321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.063807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.063838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.064365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.064407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.064909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.064940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.065465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.065496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.065974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.065985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.066474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.066486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 
00:27:23.117 [2024-07-24 21:52:31.067000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.067011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.067424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.067435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.067940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.067950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.068348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.068359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.068776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.068786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.069242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.069253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.069748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.069759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.070227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.070259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.070769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.070800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.071311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.071342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 
00:27:23.117 [2024-07-24 21:52:31.071859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.071890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.072381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.072412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.072854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.072885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.073389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.073420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.073879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.073910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.074447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.074479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.074994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.075005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.075434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.117 [2024-07-24 21:52:31.075445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.117 qpair failed and we were unable to recover it. 00:27:23.117 [2024-07-24 21:52:31.075897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.075908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.076310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.076322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 
00:27:23.118 [2024-07-24 21:52:31.076667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.076704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.077087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.077118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.077588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.077619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.078143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.078175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.078618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.078629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.079118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.079148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.079627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.079657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.080171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.080183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.080613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.080623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.081121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.081136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 
00:27:23.118 [2024-07-24 21:52:31.081575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.081587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.082089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.082101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.082425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.082436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.082844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.082874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.083337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.083369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.083900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.083930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.084463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.084493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.085025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.085036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.085510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.085521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.086048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.086059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 
00:27:23.118 [2024-07-24 21:52:31.086530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.086542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.087068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.087100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.087606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.087637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.088124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.088155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.088658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.088668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.089029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.089040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.089466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.089496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.089997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.090027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.090604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.090635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.091095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.091107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 
00:27:23.118 [2024-07-24 21:52:31.091623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.118 [2024-07-24 21:52:31.091633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.118 qpair failed and we were unable to recover it. 00:27:23.118 [2024-07-24 21:52:31.092146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.092157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.092567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.092578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.093038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.093055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.093474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.093486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.093894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.093905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.094314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.094325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.094811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.094841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.095525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.095537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.096245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.096280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 
00:27:23.119 [2024-07-24 21:52:31.096840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.096877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.097362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.097373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.097885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.097896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.098388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.098419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.098941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.098952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.099456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.099468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.099975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.099985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.100441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.100452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.100916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.100926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.101385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.101400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 
00:27:23.119 [2024-07-24 21:52:31.101863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.101873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.102345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.102356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.102865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.102876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.103316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.103327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.103812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.103823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.104338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.104350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.104700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.104710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.105123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.105134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.105596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.105607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.106063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.106074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 
00:27:23.119 [2024-07-24 21:52:31.106625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.106637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.107118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.107129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.107539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.107550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.108035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.108053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.108559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.108570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.109079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.109090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.109579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.109590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.110086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.119 [2024-07-24 21:52:31.110098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.119 qpair failed and we were unable to recover it. 00:27:23.119 [2024-07-24 21:52:31.110516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.110526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.110879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.110890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 
00:27:23.120 [2024-07-24 21:52:31.111301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.111313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.111796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.111807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.112306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.112317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.112755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.112765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.113228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.113240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.113765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.113776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.114279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.114290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.114717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.114727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.115189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.115200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.115603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.115614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 
00:27:23.120 [2024-07-24 21:52:31.116080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.116091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.116574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.116584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.117086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.117097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.117533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.117563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.118117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.118149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.118696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.118726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.119250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.119281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.119740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.119770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.120225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.120257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.120791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.120827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 
00:27:23.120 [2024-07-24 21:52:31.121376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.121387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.121888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.121898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.122313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.122323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.122828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.122860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.123441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.120 [2024-07-24 21:52:31.123473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.120 qpair failed and we were unable to recover it. 00:27:23.120 [2024-07-24 21:52:31.123953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.123984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.124549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.124581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.125158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.125189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.125735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.125765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.126290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.126321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 
00:27:23.121 [2024-07-24 21:52:31.126768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.126799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.127276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.127307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.127755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.127785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.128351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.128385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.128852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.128883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.129466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.129498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.130011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.130041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.130553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.130583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.130964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.130994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.131560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.131592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 
00:27:23.121 [2024-07-24 21:52:31.132143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.132175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.132663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.132694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.133220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.133251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.133801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.133831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.134352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.134383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.134895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.134925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.135489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.135521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.136064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.136096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.136656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.136687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.137258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.137290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 
00:27:23.121 [2024-07-24 21:52:31.137845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.137875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.138424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.138456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.138990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.139020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.139512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.139543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.140019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.140057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.140609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.140639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.141209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.141220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.141625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.141635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.142138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.142169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.142741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.142777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 
00:27:23.121 [2024-07-24 21:52:31.143358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.143389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.143975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.144005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.121 [2024-07-24 21:52:31.144523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.121 [2024-07-24 21:52:31.144553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.121 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.145104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.145136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.145601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.145631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.146159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.146191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.146737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.146767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.147164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.147195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.147749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.147779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.148354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.148385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 
00:27:23.122 [2024-07-24 21:52:31.148879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.148909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.149442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.149474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.149949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.149980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.150521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.150553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.151100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.151131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.151620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.151650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.152114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.152145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.152677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.152707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.153231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.153242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.153653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.153684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 
00:27:23.122 [2024-07-24 21:52:31.154207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.154239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.154786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.154817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.155364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.155396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.155879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.155909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.156339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.156350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.156840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.156870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.157451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.157484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.158032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.158072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.158529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.158558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.159053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.159084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 
00:27:23.122 [2024-07-24 21:52:31.159604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.159634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.160120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.160151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.160682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.160713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.161216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.161248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.161755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.161785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.162297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.162329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.122 qpair failed and we were unable to recover it. 00:27:23.122 [2024-07-24 21:52:31.162846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.122 [2024-07-24 21:52:31.162876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.163404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.163435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.163980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.164010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.164561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.164592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 
00:27:23.123 [2024-07-24 21:52:31.165098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.165130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.165614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.165645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.166174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.166205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.166754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.166784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.167328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.167339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.167771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.167802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.168351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.168394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.168969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.168999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.169548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.169579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.170087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.170119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 
00:27:23.123 [2024-07-24 21:52:31.170673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.170704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.171254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.171285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.171857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.171887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.172429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.172460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.172955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.172985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.173446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.173478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.174054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.174085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.174547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.174577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.175087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.175120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 00:27:23.123 [2024-07-24 21:52:31.175666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.123 [2024-07-24 21:52:31.175696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.123 qpair failed and we were unable to recover it. 
00:27:23.400 [2024-07-24 21:52:31.281415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.281448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.281980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.282010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.282576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.282607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.283166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.283198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.283686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.283716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.284265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.284277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.284710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.284741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.285196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.285228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.285787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.285817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.286385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.286419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 
00:27:23.400 [2024-07-24 21:52:31.286998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.287029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.287447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.287479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.288002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.288033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.288618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.288648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.289254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.289286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.289833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.289864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.290338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.290376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.290902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.290934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.291466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.291498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.292065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.292096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 
00:27:23.400 [2024-07-24 21:52:31.292576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.292607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.292980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.293011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.293549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.293581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.294123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.400 [2024-07-24 21:52:31.294156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.400 qpair failed and we were unable to recover it. 00:27:23.400 [2024-07-24 21:52:31.294688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.294718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.295268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.295301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.295828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.295860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.296433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.296465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.297055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.297086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.297633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.297664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 
00:27:23.401 [2024-07-24 21:52:31.298133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.298166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.298692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.298723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.299063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.299095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.299631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.299662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.300196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.300229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.300701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.300732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.301179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.301210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.301740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.301771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.302238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.302270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.302725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.302755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 
00:27:23.401 [2024-07-24 21:52:31.303203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.303234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.303634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.303665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.304225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.304257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.304795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.304827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.305269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.305301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.305737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.305747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.306173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.306206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.306738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.306769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.307277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.307309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.307870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.307901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 
00:27:23.401 [2024-07-24 21:52:31.308283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.308315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.308745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.308755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.309250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.309281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.309721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.309751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.310263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.401 [2024-07-24 21:52:31.310295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.401 qpair failed and we were unable to recover it. 00:27:23.401 [2024-07-24 21:52:31.310817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.310847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.311349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.311386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.311913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.311924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.312364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.312395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.312847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.312878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 
00:27:23.402 [2024-07-24 21:52:31.313349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.313380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.313884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.313914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.314464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.314496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.314961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.314991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.315506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.315517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.316001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.316012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.316379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.316390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.316866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.316877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.317236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.317248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.317643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.317653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 
00:27:23.402 [2024-07-24 21:52:31.318115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.318127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.318568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.318578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.318969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.318980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.319405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.319416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.319841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.319852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.320273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.320284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.320635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.320646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.321061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.321073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.321475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.321485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.321882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.321893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 
00:27:23.402 [2024-07-24 21:52:31.322321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.322332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.322738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.322768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.323263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.323274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.323739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.323751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.324158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.324169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.324670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.324700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.325216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.325248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.325750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.325761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.326261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.326272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.326616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.326627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 
00:27:23.402 [2024-07-24 21:52:31.326993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.402 [2024-07-24 21:52:31.327003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.402 qpair failed and we were unable to recover it. 00:27:23.402 [2024-07-24 21:52:31.327402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.327414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.327786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.327797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.328236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.328247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.328611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.328621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.329036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.329072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.329548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.329561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.329996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.330006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.330305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.330316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.330804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.330834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 
00:27:23.403 [2024-07-24 21:52:31.331382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.331414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.331732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.331743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.332214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.332225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.332654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.332665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.333131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.333162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.333672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.333702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.334209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.334220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.334702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.334713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.335139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.335150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.335558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.335569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 
00:27:23.403 [2024-07-24 21:52:31.336050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.336061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.336485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.336516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.337015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.337053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.337570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.337581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.338002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.338012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.338495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.338506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.338963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.338974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.339335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.339346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.339757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.339768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.340264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.340275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 
00:27:23.403 [2024-07-24 21:52:31.340764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.340775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.341124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.341135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.341648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.341658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.342168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.342179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.342590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.342601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.343006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.343016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.343525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.343536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.344028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.403 [2024-07-24 21:52:31.344039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.403 qpair failed and we were unable to recover it. 00:27:23.403 [2024-07-24 21:52:31.344472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.344483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.344972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.344983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 
00:27:23.404 [2024-07-24 21:52:31.345398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.345409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.345838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.345848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.346241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.346252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.346679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.346689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.347100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.347111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.347585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.347596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.348005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.348018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.348528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.348539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.348970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.348981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.349448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.349460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 
00:27:23.404 [2024-07-24 21:52:31.349977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.349987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.350509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.350520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.351011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.351022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.351464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.351475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.351937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.351948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.352350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.352361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.352844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.352854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.353368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.353379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.353819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.353830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.354252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.354263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 
00:27:23.404 [2024-07-24 21:52:31.354753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.354764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.355195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.355207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.355703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.355714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.356207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.356221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.356686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.356717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.357461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.357474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.357907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.357918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.358343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.358354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.358785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.358796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.359288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.359299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 
00:27:23.404 [2024-07-24 21:52:31.359808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.359838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.360417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.360449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.360930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.360960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.361471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.361503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.404 [2024-07-24 21:52:31.362110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.404 [2024-07-24 21:52:31.362142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.404 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.362605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.362635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.363172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.363204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.363782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.363812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.364279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.364310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.364761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.364791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 
00:27:23.405 [2024-07-24 21:52:31.365199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.365230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.365697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.365727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.366275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.366306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.366815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.366844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.367349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.367380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.367912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.367942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.368488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.368525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.369014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.369053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.369575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.369605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.370167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.370199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 
00:27:23.405 [2024-07-24 21:52:31.370765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.370796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.371328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.371360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.371936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.371966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.372501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.372532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.373034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.373073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.373612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.373643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.374172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.374204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.374609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.374639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.375085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.375116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.375574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.375605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 
00:27:23.405 [2024-07-24 21:52:31.376147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.376190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.376723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.376753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.377229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.377260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.377697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.377727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.378252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.378284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.378827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.378857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.379377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.379407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.379889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.379919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.380436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.380478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.380893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.380923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 
00:27:23.405 [2024-07-24 21:52:31.381431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.405 [2024-07-24 21:52:31.381462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.405 qpair failed and we were unable to recover it. 00:27:23.405 [2024-07-24 21:52:31.382006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.382036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.382575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.382606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.383093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.383125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.383594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.383624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.384063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.384095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.384621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.384651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.385240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.385251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.385667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.385697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.386095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.386126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 
00:27:23.406 [2024-07-24 21:52:31.386652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.386682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.387230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.387261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.387756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.387786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.388299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.388330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.388904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.388933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.389504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.389535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.390063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.390100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.390656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.390686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.391199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.391210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.391677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.391706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 
00:27:23.406 [2024-07-24 21:52:31.392240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.392272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.392733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.392763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.393320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.393351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.393911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.393941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.394491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.394523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.395025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.395066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.395527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.395557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.396023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.396063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.396540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.396569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.397063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.397094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 
00:27:23.406 [2024-07-24 21:52:31.397673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.397704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.406 [2024-07-24 21:52:31.398237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.406 [2024-07-24 21:52:31.398269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.406 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.398823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.398853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.399265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.399296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.399772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.399803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.400285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.400316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.400881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.400910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.401386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.401417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.401917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.401947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.402460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.402492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 
00:27:23.407 [2024-07-24 21:52:31.403067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.403099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.403670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.403701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.404252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.404294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.404790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.404821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.405383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.405414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.405954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.405985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.406544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.406575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.407120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.407152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.407719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.407749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.408217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.408248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 
00:27:23.407 [2024-07-24 21:52:31.408773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.408804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.409329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.409360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.409874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.409904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.410448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.410480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.411000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.411030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.411600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.411632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.412168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.412206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.412709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.412739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.413259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.413290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.413851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.413881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 
00:27:23.407 [2024-07-24 21:52:31.414425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.414456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.414906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.414936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.415487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.415518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.416077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.416109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.416652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.416682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.417187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.417219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.417731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.417762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.418311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.418342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.407 [2024-07-24 21:52:31.418832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.407 [2024-07-24 21:52:31.418862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.407 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.419389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.419420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 
00:27:23.408 [2024-07-24 21:52:31.419876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.419907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.420437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.420469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.421024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.421063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.421603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.421633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.422144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.422176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.422647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.422677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.423193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.423224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.423750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.423779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.424251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.424283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.424737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.424768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 
00:27:23.408 [2024-07-24 21:52:31.425317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.425348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.425839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.425869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.426352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.426384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.426891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.426921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.427458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.427489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.428063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.428095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.428629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.428662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.429165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.429197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.429748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.429779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.430333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.430364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 
00:27:23.408 [2024-07-24 21:52:31.430818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.430848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.431367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.431398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.431952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.431982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.432527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.432558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.433106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.433137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.433716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.433747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.434165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.434196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.434664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.434694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.435241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.435273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.435847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.435878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 
00:27:23.408 [2024-07-24 21:52:31.436409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.436440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.436889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.436919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.437472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.437504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.438024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.438077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.438547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.438577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.408 qpair failed and we were unable to recover it. 00:27:23.408 [2024-07-24 21:52:31.439062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.408 [2024-07-24 21:52:31.439093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.439631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.439661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.440051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.440082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.440610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.440640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.441189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.441220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 
00:27:23.409 [2024-07-24 21:52:31.441777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.441808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.442326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.442337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.442751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.442782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.443220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.443252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.443776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.443805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.444277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.444309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.444861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.444892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.445470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.445501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.446077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.446108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.446666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.446697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 
00:27:23.409 [2024-07-24 21:52:31.447262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.447294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.447823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.447853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.448438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.448469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.449001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.449036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.449608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.449639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.450195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.450227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.450755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.450785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.451298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.451329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.451904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.451934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.452415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.452446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 
00:27:23.409 [2024-07-24 21:52:31.452906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.452937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.453471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.453502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.454075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.454106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.454653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.454683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.455206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.455237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.455766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.455797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.456348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.456379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.456950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.456981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.457572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.457603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.458172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.458204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 
00:27:23.409 [2024-07-24 21:52:31.458715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.458746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.459304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.409 [2024-07-24 21:52:31.459335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.409 qpair failed and we were unable to recover it. 00:27:23.409 [2024-07-24 21:52:31.459903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.459933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.460441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.460472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.460982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.461012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.461554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.461585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.462115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.462127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.462617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.462647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.463195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.463227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.463736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.463747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 
00:27:23.410 [2024-07-24 21:52:31.464261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.464294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.464846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.464876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.465385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.465417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.465988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.466018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.466598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.466628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.467176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.467209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.467780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.467810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.468370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.468402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.468956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.468986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.469393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.469424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 
00:27:23.410 [2024-07-24 21:52:31.469955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.469986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.470567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.470599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.471125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.471157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.471617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.471653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.472204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.472235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.472727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.472757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.473224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.473255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.473759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.473790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.474330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.474361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.474866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.474896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 
00:27:23.410 [2024-07-24 21:52:31.475435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.475466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.475970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.476001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.476531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.476562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.477011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.477041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.477522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.477552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.478082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.478114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.478657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.478688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.479166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.410 [2024-07-24 21:52:31.479177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.410 qpair failed and we were unable to recover it. 00:27:23.410 [2024-07-24 21:52:31.479644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.479674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.480211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.480254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 
00:27:23.411 [2024-07-24 21:52:31.480757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.480788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.481333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.481364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.481896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.481927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.482478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.482509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.483074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.483106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.483637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.483667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.484139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.484171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.484619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.484649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.485127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.485158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.485664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.485694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 
00:27:23.411 [2024-07-24 21:52:31.486212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.486244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.486723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.486753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.487226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.487257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.487781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.487811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.488345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.488377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.488889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.488919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.489389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.489420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.489977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.489988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.490443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.490475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.490945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.490975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 
00:27:23.411 [2024-07-24 21:52:31.491481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.491521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.492024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.492065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.492569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.492599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.493142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.493179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.493709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.493739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.494224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.494255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.494666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.494697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.495257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.495289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.495814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.495845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 00:27:23.411 [2024-07-24 21:52:31.496404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.411 [2024-07-24 21:52:31.496436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.411 qpair failed and we were unable to recover it. 
00:27:23.412 [2024-07-24 21:52:31.496834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.412 [2024-07-24 21:52:31.496865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.412 qpair failed and we were unable to recover it. 00:27:23.412 [2024-07-24 21:52:31.497332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.412 [2024-07-24 21:52:31.497363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.412 qpair failed and we were unable to recover it. 00:27:23.412 [2024-07-24 21:52:31.497790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.412 [2024-07-24 21:52:31.497820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.412 qpair failed and we were unable to recover it. 00:27:23.412 [2024-07-24 21:52:31.498265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.412 [2024-07-24 21:52:31.498296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.412 qpair failed and we were unable to recover it. 00:27:23.412 [2024-07-24 21:52:31.498746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.412 [2024-07-24 21:52:31.498759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.412 qpair failed and we were unable to recover it. 00:27:23.412 [2024-07-24 21:52:31.499250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.412 [2024-07-24 21:52:31.499282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.412 qpair failed and we were unable to recover it. 00:27:23.412 [2024-07-24 21:52:31.499756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.412 [2024-07-24 21:52:31.499789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.412 qpair failed and we were unable to recover it. 00:27:23.412 [2024-07-24 21:52:31.500299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.412 [2024-07-24 21:52:31.500331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.412 qpair failed and we were unable to recover it. 00:27:23.412 [2024-07-24 21:52:31.501018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.412 [2024-07-24 21:52:31.501057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.412 qpair failed and we were unable to recover it. 00:27:23.412 [2024-07-24 21:52:31.501523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.412 [2024-07-24 21:52:31.501553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.412 qpair failed and we were unable to recover it. 
00:27:23.684 [2024-07-24 21:52:31.501966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.501998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.502566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.502600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.503161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.503193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.503650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.503680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.504232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.504243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.504603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.504614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.505063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.505095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.505599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.505630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.506197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.506229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.506583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.506594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 
00:27:23.684 [2024-07-24 21:52:31.507086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.507118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.507579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.507610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.508133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.508164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.508694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.508725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.509238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.509270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.509833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.509864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.510330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.510361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.510916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.510947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.511445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.511477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.512059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.512090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 
00:27:23.684 [2024-07-24 21:52:31.512619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.512649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.513161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.513193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.513656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.513687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.514210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.514247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.514814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.514844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.515361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.515374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.515817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.515847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.516305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.516337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.516733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.516764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.517283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.517314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 
00:27:23.684 [2024-07-24 21:52:31.517768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.517800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.518378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.518411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.518958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.518989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.519464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.519496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.519953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.519983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.520416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.520427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.520844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.520874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.521430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.521462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.522038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.522087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.522642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.522673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 
00:27:23.684 [2024-07-24 21:52:31.523386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.523421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.523996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.524027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.524533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.524564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.525100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.525131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.525602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.525632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.526159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.526191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.526721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.684 [2024-07-24 21:52:31.526751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.684 qpair failed and we were unable to recover it. 00:27:23.684 [2024-07-24 21:52:31.527296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.527328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 00:27:23.685 [2024-07-24 21:52:31.527895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.527925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 00:27:23.685 [2024-07-24 21:52:31.528388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.528399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 
00:27:23.685 [2024-07-24 21:52:31.528875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.528907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 00:27:23.685 [2024-07-24 21:52:31.529433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.529465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 00:27:23.685 [2024-07-24 21:52:31.529892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.529903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 00:27:23.685 [2024-07-24 21:52:31.530385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.530397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 00:27:23.685 [2024-07-24 21:52:31.530865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.530895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 00:27:23.685 [2024-07-24 21:52:31.531439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.531471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 00:27:23.685 [2024-07-24 21:52:31.531990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.532020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 00:27:23.685 [2024-07-24 21:52:31.532559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.532590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 00:27:23.685 [2024-07-24 21:52:31.533079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.533110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 00:27:23.685 [2024-07-24 21:52:31.533646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.685 [2024-07-24 21:52:31.533676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:23.685 qpair failed and we were unable to recover it. 
[00:27:23.685 - 00:27:23.687: repeated qpair connection failures from 2024-07-24 21:52:31.534 through 21:52:31.607; every attempt against tqpair=0x7f4e48000b90 logs the same three messages:
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.]
[00:27:23.687 - 00:27:23.688: the same failure sequence continues from 2024-07-24 21:52:31.607 through 21:52:31.635, now against tqpair=0xa73f30:
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.]
00:27:23.688 [2024-07-24 21:52:31.635912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.635942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.636377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.636408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.636849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.636879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.637372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.637403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.637853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.637883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.638132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.638146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.638640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.638669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.639184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.639215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.639736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.639767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.639960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.639990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 
00:27:23.689 [2024-07-24 21:52:31.640476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.640507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.640952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.640982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.641473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.641504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.642022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.642062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.642526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.642556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.643063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.643094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.643627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.643657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.644195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.644227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.644670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.644700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.645204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.645218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 
00:27:23.689 [2024-07-24 21:52:31.645648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.645678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.646184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.646214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.646528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.646559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.647088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.647118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.647503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.647533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.647976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.648006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.648480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.648512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.648935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.648965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.649407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.649439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.649898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.649928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 
00:27:23.689 [2024-07-24 21:52:31.650420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.650451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.650969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.650999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.651522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.651553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.652086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.652117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.652654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.652684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.653127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.653164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.653695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.653725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.654254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.654284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.654726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.654757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.655212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.655243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 
00:27:23.689 [2024-07-24 21:52:31.655751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.655781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.656287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.656301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.656711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.656725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.657234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.657265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.657691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.657721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.658235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.658265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.658692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.658722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.659230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.659244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.659653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.659682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.660202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.660233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 
00:27:23.689 [2024-07-24 21:52:31.660734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.660764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.661011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.661041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.661474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.661488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.661907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.661921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.662170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.662184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.662667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.662697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.663142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.663172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.663677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.663707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.689 qpair failed and we were unable to recover it. 00:27:23.689 [2024-07-24 21:52:31.664199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.689 [2024-07-24 21:52:31.664230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.664685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.664715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 
00:27:23.690 [2024-07-24 21:52:31.665171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.665202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.665696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.665726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.666187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.666218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.666660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.666706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.667196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.667227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.667737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.667767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.668288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.668319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.668854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.668883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.669256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.669287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.669783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.669813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 
00:27:23.690 [2024-07-24 21:52:31.670267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.670299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.670746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.670775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.671202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.671233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.671772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.671802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.672258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.672272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.672747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.672761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.673129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.673161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.673673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.673703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.674147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.674177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.674656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.674670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 
00:27:23.690 [2024-07-24 21:52:31.675075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.675106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.675554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.675584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.676124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.676154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.676676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.676706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.677218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.677232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.677725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.677755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.678203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.678234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.678723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.678753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.679239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.679270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.679821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.679851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 
00:27:23.690 [2024-07-24 21:52:31.680429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.680460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.680925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.680955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.681395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.681425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.681884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.681915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.682356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.682387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.682834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.682864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.683380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.683411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.683926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.683956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.684446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.684477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.684900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.684930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 
00:27:23.690 [2024-07-24 21:52:31.685424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.685454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.685849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.685879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.686391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.686422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.686937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.686973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.687408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.687439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.687928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.687957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.688405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.688449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.688889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.688919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.689178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.689192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.689671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.689685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 
00:27:23.690 [2024-07-24 21:52:31.690085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.690116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.690 [2024-07-24 21:52:31.690486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-07-24 21:52:31.690517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.690 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.690960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.690989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.691444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.691474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.691932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.691962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.692350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.692380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.692811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.692841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.693360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.693392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.693891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.693921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.694307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.694338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 
00:27:23.691 [2024-07-24 21:52:31.694835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.694849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.695258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.695288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.695730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.695760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.696259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.696290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.696653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.696666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.696903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.696916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.697372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.697402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.697865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.697894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.698413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.698445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.698781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.698810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 
00:27:23.691 [2024-07-24 21:52:31.699290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.699304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.699703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.699716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.700175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.700206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.700646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.700676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.701051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.701082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.701597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.701627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.702067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.702097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.702618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.702632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.702994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.703007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.703246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.703260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 
00:27:23.691 [2024-07-24 21:52:31.703677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.703691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.704093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.704107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.704536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.704565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.705004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.705033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.705465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.705483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.705904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.705917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.706314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.706328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.706806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.706820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.707278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.707308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.707812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.707842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 
00:27:23.691 [2024-07-24 21:52:31.708226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.708257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.708676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.708690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.709164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.709178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.709545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.709575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.709976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.710005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.710473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.710504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.711013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.711053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.711573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.711603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.712095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.712126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.712564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.712593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 
00:27:23.691 [2024-07-24 21:52:31.713022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.713035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.713499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.713529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.714041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.714080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.714516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.714546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.715000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.715029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.715548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.715578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.716065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.716095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.716601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.716632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.691 [2024-07-24 21:52:31.717075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-07-24 21:52:31.717105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.691 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.717613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.717643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 
00:27:23.692 [2024-07-24 21:52:31.718067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.718096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.718582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.718618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.719055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.719086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.719594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.719623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.720136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.720167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.720680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.720709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.721210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.721240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.721711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.721740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.722229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.722260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.722746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.722775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 
00:27:23.692 [2024-07-24 21:52:31.723287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.723317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.723778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.723809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.724325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.724355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.724862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.724875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.725379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.725409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.725929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.725960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.726423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.726454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.726909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.726938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.727375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.727416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.727834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.727848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 
00:27:23.692 [2024-07-24 21:52:31.728337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.728368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.728792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.728822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.729262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.729293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.729679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.729708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.730080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.730111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.730622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.730652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.731141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.731167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.731653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.731683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.731973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.732003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.732508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.732539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 
00:27:23.692 [2024-07-24 21:52:31.732973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.733003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.733453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.733484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.733921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.733955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.734466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.734497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.735011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.735041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.735433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.735463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.735899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.735929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.736362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.736393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.736923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.736937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.737388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.737419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 
00:27:23.692 [2024-07-24 21:52:31.737858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.737888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.738421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.738452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.738841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.738878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.739341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.739372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.739613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.739626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.740019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.740068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.740526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.740563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.741032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.741050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.741480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.741510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.741971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.742001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 
00:27:23.692 [2024-07-24 21:52:31.742431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.742462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.742960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.742973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.743435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.743466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.743961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.743991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.744555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.744587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.745108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.745139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.745512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.745542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.692 [2024-07-24 21:52:31.746066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-07-24 21:52:31.746096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.692 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.746472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.746502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.747026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.747067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 
00:27:23.693 [2024-07-24 21:52:31.747494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.747524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.747967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.747997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.748375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.748389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.748809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.748823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.749300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.749314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.749667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.749697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.750155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.750186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.750630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.750660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.751166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.751180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.751677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.751713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 
00:27:23.693 [2024-07-24 21:52:31.752232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.752263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.752653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.752683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.753123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.753154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.753671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.753701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.754142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.754172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.754613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.754643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.755073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.755103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.755590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.755621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.756113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.756143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.756663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.756692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 
00:27:23.693 [2024-07-24 21:52:31.757143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.757174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.757663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.757693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.758070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.758101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.758537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.758551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.758960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.758974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.759453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.759484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.759906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.759937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.760427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.760462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.760915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.760945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.761432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.761464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 
00:27:23.693 [2024-07-24 21:52:31.761856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.761885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.762374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.762405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.762857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.762887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.763317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.763349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.763834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.763847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.764320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.764350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.764542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.764572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.765098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.765128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.765561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.765592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.766104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.766134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 
00:27:23.693 [2024-07-24 21:52:31.766553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.766583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.767055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.767086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.767458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.767488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.767913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.767943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.768430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.768461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.768894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.768908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.769413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.769444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.769960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.769990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.770427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.693 [2024-07-24 21:52:31.770459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.693 qpair failed and we were unable to recover it. 00:27:23.693 [2024-07-24 21:52:31.770887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.770918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 
00:27:23.694 [2024-07-24 21:52:31.771344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.771380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.771825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.771838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.772175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.772208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.772699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.772729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.773161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.773192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.773695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.773709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.774118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.774132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.774564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.774594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.775110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.775140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.775576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.775606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 
00:27:23.694 [2024-07-24 21:52:31.776038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.776075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.776511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.776541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.776929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.776958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.777481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.777512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.777977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.778007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.778200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.778232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.778654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.778683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.779103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.779133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.779621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.779651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.780093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.780123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 
00:27:23.694 [2024-07-24 21:52:31.780635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.780665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.781089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.781120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.781583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.781612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.782053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.782084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.782529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.782559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.782929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.782958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.783392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.783423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.783885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.783920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.784362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.784392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.784760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.784789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 
00:27:23.694 [2024-07-24 21:52:31.785236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.785250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.785677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.785707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.786153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.786184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.786699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.786728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.787124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.787155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.787685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.787698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.694 [2024-07-24 21:52:31.788091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.694 [2024-07-24 21:52:31.788122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.694 qpair failed and we were unable to recover it. 00:27:23.968 [2024-07-24 21:52:31.788657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.968 [2024-07-24 21:52:31.788688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.968 qpair failed and we were unable to recover it. 00:27:23.968 [2024-07-24 21:52:31.789080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.968 [2024-07-24 21:52:31.789111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.968 qpair failed and we were unable to recover it. 00:27:23.968 [2024-07-24 21:52:31.789619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.968 [2024-07-24 21:52:31.789650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.968 qpair failed and we were unable to recover it. 
00:27:23.968 [2024-07-24 21:52:31.790086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.968 [2024-07-24 21:52:31.790116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.968 qpair failed and we were unable to recover it. 00:27:23.968 [2024-07-24 21:52:31.790543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.968 [2024-07-24 21:52:31.790557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.968 qpair failed and we were unable to recover it. 00:27:23.968 [2024-07-24 21:52:31.790963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.790992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.791459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.791490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.791986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.792000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.792422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.792464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.792980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.793010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.793455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.793485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.793863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.793893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.794384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.794415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 
00:27:23.969 [2024-07-24 21:52:31.794891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.794905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.795381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.795396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.795832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.795845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.796253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.796267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.796621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.796635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.796943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.796956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.797446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.797477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.797991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.798021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.798410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.798440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.798958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.798988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 
00:27:23.969 [2024-07-24 21:52:31.799436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.799468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.799975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.800004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.800449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.800480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.800848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.800862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.801343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.801357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.801810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.801823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.802223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.802253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.802765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.802794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.803282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.803324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.803779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.803809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 
00:27:23.969 [2024-07-24 21:52:31.804321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.804362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.804821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.804851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.805169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.805199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.805704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.805733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.806248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.806278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.806766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.806795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.807235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.807265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.807696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.807726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.808146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.808177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.969 qpair failed and we were unable to recover it. 00:27:23.969 [2024-07-24 21:52:31.808617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.969 [2024-07-24 21:52:31.808646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 
00:27:23.970 [2024-07-24 21:52:31.809151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.809165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.809633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.809663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.810036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.810085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.810580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.810609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.811128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.811158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.811612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.811642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.812081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.812095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.812517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.812531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.812986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.813000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.813397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.813411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 
00:27:23.970 [2024-07-24 21:52:31.813760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.813773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.814255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.814284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.814724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.814738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.815145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.815160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.815636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.815650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.816057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.816071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.816507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.816520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.816942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.816955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.817371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.817385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.817630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.817643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 
00:27:23.970 [2024-07-24 21:52:31.817974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.817987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.818436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.818466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.818719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.818748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.819236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.819267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.819699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.819729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.820143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.820157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.820554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.820568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.821021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.821034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.821472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.821502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.821997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.822027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 
00:27:23.970 [2024-07-24 21:52:31.822409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.822439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.822895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.822909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.823309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.823323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.823724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.823737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.824152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.824166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.824592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.824606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.824999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.825013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.970 qpair failed and we were unable to recover it. 00:27:23.970 [2024-07-24 21:52:31.825470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.970 [2024-07-24 21:52:31.825500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.825991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.826021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.826462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.826492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 
00:27:23.971 [2024-07-24 21:52:31.826994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.827007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.827411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.827425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.827915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.827945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.828324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.828354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.828733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.828746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.829220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.829234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.829629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.829643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.830009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.830023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.830236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.830250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.830716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.830730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 
00:27:23.971 [2024-07-24 21:52:31.831122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.831135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.831590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.831604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.832080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.832094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.832521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.832535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.832868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.832898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.833147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.833177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.833611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.833645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.834092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.834106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.834492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.834505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.834936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.834950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 
00:27:23.971 [2024-07-24 21:52:31.835349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.835363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.835758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.835771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.836176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.836190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.836670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.836684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.837139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.837153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.837568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.837597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.838061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.838091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.838630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.838660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.839145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.839159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.839641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.839670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 
00:27:23.971 [2024-07-24 21:52:31.840109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.840140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.840597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.840627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.841058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.841089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.841476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.841505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.841947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.841977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.971 qpair failed and we were unable to recover it. 00:27:23.971 [2024-07-24 21:52:31.842418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.971 [2024-07-24 21:52:31.842432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.842890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.842920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.843377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.843408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.843838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.843868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.844328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.844359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 
00:27:23.972 [2024-07-24 21:52:31.844872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.844902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.845336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.845374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.845852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.845882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.846277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.846308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.846805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.846819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.847244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.847258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.847711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.847725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.847991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.848005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.848398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.848413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.848819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.848833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 
00:27:23.972 [2024-07-24 21:52:31.849256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.849270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.849554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.849568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.849969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.849982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.850482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.850496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.850904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.850918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.851408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.851423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.851848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.851861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.852080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.852094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.852415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.852429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.852821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.852850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 
00:27:23.972 [2024-07-24 21:52:31.853268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.853299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.853738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.853767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.854304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.854334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.854610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.854640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.854942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.854971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.855419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.855433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.855830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.855844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.856336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.856350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.856738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.856751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.857229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.857243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 
00:27:23.972 [2024-07-24 21:52:31.857633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.857646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.858130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.858144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.858437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.972 [2024-07-24 21:52:31.858450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.972 qpair failed and we were unable to recover it. 00:27:23.972 [2024-07-24 21:52:31.858852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.858865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.859343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.859358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.859754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.859767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.860172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.860186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.860587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.860600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.861080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.861094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.861493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.861507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 
00:27:23.973 [2024-07-24 21:52:31.861962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.861976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.862388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.862419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.862905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.862919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.863373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.863387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.863876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.863892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.864392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.864406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.864803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.864817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.865232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.865246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.865633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.865647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.866127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.866142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 
00:27:23.973 [2024-07-24 21:52:31.866541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.866555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.866956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.866986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.867493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.867525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.867975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.868006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.868440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.868470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.869129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.869160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.869660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.869690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.870226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.870257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.870803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.870832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.871271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.871302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 
00:27:23.973 [2024-07-24 21:52:31.871828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.871858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.872340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.872372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.872815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.872844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.873287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.873317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.873758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.973 [2024-07-24 21:52:31.873797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.973 qpair failed and we were unable to recover it. 00:27:23.973 [2024-07-24 21:52:31.874223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.874237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.874641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.874654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.875063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.875078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.875481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.875495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.875892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.875907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 
00:27:23.974 [2024-07-24 21:52:31.876297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.876311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.876467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.876480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.876959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.876972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.877332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.877346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.877683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.877697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.878088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.878102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.878425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.878439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.878911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.878925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.879332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.879346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.879745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.879758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 
00:27:23.974 [2024-07-24 21:52:31.880113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.880127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.880534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.880548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.881027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.881040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.881535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.881549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.881792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.881806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.882288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.882305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.882693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.882707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.883139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.883170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.883534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.883564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.884083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.884113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 
00:27:23.974 [2024-07-24 21:52:31.884602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.884632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.885061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.885091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.885527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.885556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.886018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.886056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.886316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.886345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.886761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.886774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.887109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.887123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.887472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.887502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.888036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.888073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.888463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.888493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 
00:27:23.974 [2024-07-24 21:52:31.888981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.889010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.889448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.889479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.889906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.974 [2024-07-24 21:52:31.889935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.974 qpair failed and we were unable to recover it. 00:27:23.974 [2024-07-24 21:52:31.890366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.890397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.890838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.890876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.891237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.891268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.891644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.891673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.892162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.892193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.892726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.892755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.893267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.893297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 
00:27:23.975 [2024-07-24 21:52:31.893785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.893814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.894328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.894359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.894800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.894839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.895276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.895307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.895752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.895781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.896218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.896238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.896700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.896729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.897097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.897127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.897584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.897613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.898130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.898161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 
00:27:23.975 [2024-07-24 21:52:31.898530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.898560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.899070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.899101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.899492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.899522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.900030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.900068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.900316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.900345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.900836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.900866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.901323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.901355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.901840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.901869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.902246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.902277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.902762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.902792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 
00:27:23.975 [2024-07-24 21:52:31.903149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.903163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.903646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.903676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.904098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.904129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.904491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.904529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.905010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.905039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.905294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.905324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.905815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.905845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.906268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.906299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.906718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.906748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.907203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.907233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 
00:27:23.975 [2024-07-24 21:52:31.907687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.975 [2024-07-24 21:52:31.907718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.975 qpair failed and we were unable to recover it. 00:27:23.975 [2024-07-24 21:52:31.908211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.908241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.908679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.908709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.909197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.909227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.909611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.909641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.910097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.910127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.910546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.910575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.911041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.911082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.911595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.911625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.912139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.912169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 
00:27:23.976 [2024-07-24 21:52:31.912606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.912635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.913074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.913105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.913533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.913564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.914000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.914035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.914444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.914475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.914932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.914962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.915388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.915419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.915848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.915879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.916429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.916460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.916909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.916939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 
00:27:23.976 [2024-07-24 21:52:31.917265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.917296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.917494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.917524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.917984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.918013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.918395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.918426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.918849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.918879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.919323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.919354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.919840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.919870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.920388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.920419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.920846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.920876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.921390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.921420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 
00:27:23.976 [2024-07-24 21:52:31.921806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.921837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.922267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.922298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.922809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.922838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.923349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.923380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.923867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.923896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.924334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.924365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.924873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.924903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.925393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.925424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.925932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.976 [2024-07-24 21:52:31.925961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.976 qpair failed and we were unable to recover it. 00:27:23.976 [2024-07-24 21:52:31.926450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.926480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 
00:27:23.977 [2024-07-24 21:52:31.927000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.927035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.927558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.927588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.927968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.927997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.928499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.928530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.929021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.929060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.929498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.929527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.930014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.930061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.930574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.930604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.931055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.931086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.931574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.931604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 
00:27:23.977 [2024-07-24 21:52:31.932034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.932074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.932588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.932617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.933057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.933088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.933577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.933607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.934053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.934089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.934545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.934576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.935013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.935027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.935514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.935545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.935984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.936014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.936506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.936520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 
00:27:23.977 [2024-07-24 21:52:31.936932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.936962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.937356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.937386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.937875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.937905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.938325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.938339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.938823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.938852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.939301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.939332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.939779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.939809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.940257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.940287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.940807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.940837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.941328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.941358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 
00:27:23.977 [2024-07-24 21:52:31.941847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.941887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.942367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.942398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.942855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.942885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.943375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.943405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.943893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.943922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.944373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.944403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.944913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.977 [2024-07-24 21:52:31.944943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.977 qpair failed and we were unable to recover it. 00:27:23.977 [2024-07-24 21:52:31.945459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.945490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.945944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.945974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.946412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.946443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 
00:27:23.978 [2024-07-24 21:52:31.946956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.946986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.947450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.947487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.947990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.948019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.948478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.948509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.948882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.948911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.949420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.949451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.949839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.949868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.950373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.950404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.950892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.950923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.951444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.951475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 
00:27:23.978 [2024-07-24 21:52:31.951902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.951931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.952344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.952375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.952865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.952895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.953403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.953434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.953945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.953974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.954416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.954447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.954723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.954753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.955289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.955319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.955827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.955856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.956340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.956355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 
00:27:23.978 [2024-07-24 21:52:31.956763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.956792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.957246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.957277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.957791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.957821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.958248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.958279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.958743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.958784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.959206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.959236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.959728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.959767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.960170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.960200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.960703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.960739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 00:27:23.978 [2024-07-24 21:52:31.961165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.978 [2024-07-24 21:52:31.961179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.978 qpair failed and we were unable to recover it. 
00:27:23.978 [2024-07-24 21:52:31.961661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.961691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.962109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.962139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.962685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.962715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.963258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.963289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.963619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.963648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.964136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.964167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.964626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.964655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.965173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.965204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.965441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.965471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.965903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.965933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 
00:27:23.979 [2024-07-24 21:52:31.966299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.966313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.966553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.966567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.967028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.967153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.967655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.967686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.968187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.968223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.968525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.968555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.969065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.969096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.969524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.969539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.969946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.969976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.970368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.970399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 
00:27:23.979 [2024-07-24 21:52:31.970855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.970886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.971381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.971412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.971799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.971829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.972270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.972301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.972736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.972766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.973255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.973286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.973541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.973571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.974033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.974073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.974588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.974618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.975055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.975085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 
00:27:23.979 [2024-07-24 21:52:31.975402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.975432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.975925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.975956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.976390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.976421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.976808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.976838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.977238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.977269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.977798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.977828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.978196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.978210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.978601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.979 [2024-07-24 21:52:31.978632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.979 qpair failed and we were unable to recover it. 00:27:23.979 [2024-07-24 21:52:31.979139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.979170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.979610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.979646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 
00:27:23.980 [2024-07-24 21:52:31.979906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.979937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.980364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.980394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.980844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.980874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.981368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.981399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.981862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.981891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.982344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.982374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.982803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.982839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.983295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.983308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.983718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.983749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.984197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.984228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 
00:27:23.980 [2024-07-24 21:52:31.984649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.984679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.985180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.985211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.985946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.985977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.986472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.986503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.986950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.986979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.987439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.987469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.987894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.987925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.988391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.988421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.988906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.988936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.989304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.989335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 
00:27:23.980 [2024-07-24 21:52:31.989780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.989810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.990243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.990273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.990901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.990932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.991468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.991499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.991946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.991976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.992475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.992505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.992937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.992967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.993467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.993499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.993936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.993967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.994287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.994303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 
00:27:23.980 [2024-07-24 21:52:31.994778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.994791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.995202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.995216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.995645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.995675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.996188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.996220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.996611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.996641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.997079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.980 [2024-07-24 21:52:31.997110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.980 qpair failed and we were unable to recover it. 00:27:23.980 [2024-07-24 21:52:31.997541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:31.997571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:31.997938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:31.997967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:31.998456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:31.998487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:31.998852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:31.998882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 
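The run of records above is the host side repeatedly failing to open the TCP socket for a queue pair: every connect() to 10.0.0.2 on port 4420 (the standard NVMe/TCP port) returns errno 111, which is ECONNREFUSED on Linux, so nvme_tcp_qpair_connect_sock reports a socket connection error and the qpair is given up on ("qpair failed and we were unable to recover it"). As a hedged illustration only — this is not SPDK's posix_sock_create, just a minimal standalone C sketch under the assumption that nothing is listening on the target address — the following shows how a refused TCP connection surfaces exactly that errno:

/*
 * Minimal sketch (illustrative, not SPDK code): a plain TCP connect()
 * against an address/port with no listener fails with errno 111
 * (ECONNREFUSED), matching the "connect() failed, errno = 111" records
 * above. The address 10.0.0.2 and port 4420 mirror the log; everything
 * else here is an assumption for the example.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port, as in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target, errno is ECONNREFUSED (111 on Linux). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}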
00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Write completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Write completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Write completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Write completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Write completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Write completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Write completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Write completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Write completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Write completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Write completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 Read completed with error (sct=0, sc=8) 00:27:23.981 starting I/O failed 00:27:23.981 [2024-07-24 21:52:31.999230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.981 [2024-07-24 21:52:31.999667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:31.999709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 
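The burst above is the in-flight I/O on qpair 1 being completed with an error once the transport drops: each read/write reports sct=0, sc=8 (status code type 0 is Generic Command Status, and status code 0x08 there corresponds to "Command Aborted due to SQ Deletion" in the NVMe base specification), and spdk_nvme_qpair_process_completions then logs CQ transport error -6, i.e. -ENXIO ("No such device or address"). Purely as a readability aid — this decoder is an assumption for illustration, not an SPDK API — a tiny C sketch that maps the (sct, sc) pair printed in these records to a name:

/*
 * Illustrative decoder (not an SPDK function): maps the (sct, sc) pair
 * printed in the completion records above to the NVMe status names they
 * normally correspond to. Only the codes relevant to this log are filled
 * in; the table is deliberately incomplete.
 */
#include <stdio.h>
#include <stdint.h>

static const char *nvme_status_str(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0) {                        /* Generic Command Status */
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x04: return "Data Transfer Error";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   break;
        }
    }
    return "unrecognized status (extend the table as needed)";
}

int main(void)
{
    /* The failed reads/writes above all report sct=0, sc=8. */
    printf("sct=0, sc=8 -> %s\n", nvme_status_str(0, 8));
    return 0;
}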
00:27:23.981 [2024-07-24 21:52:32.000120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.000136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.000532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.000563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.001005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.001038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.001495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.001523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.002029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.002041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.002406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.002419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.002814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.002826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.003179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.003192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.003536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.003549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.003952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.003965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 
00:27:23.981 [2024-07-24 21:52:32.004327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.004340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.004755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.004769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.005266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.005281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.005624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.005638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.006048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.006062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.006398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.006411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.006820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.006833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.007254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.007268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.007669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.007683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.008025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.008039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 
00:27:23.981 [2024-07-24 21:52:32.008507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.008521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.981 [2024-07-24 21:52:32.008891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.981 [2024-07-24 21:52:32.008905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.981 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.009095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.009109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.009453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.009483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.009972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.009986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.010326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.010339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.010980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.011011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.011537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.011568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.011912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.011942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.012344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.012358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 
00:27:23.982 [2024-07-24 21:52:32.012716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.012746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.013246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.013277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.013699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.013734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.014253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.014267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.014663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.014677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.015028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.015047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.015492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.015506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.016001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.016015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.016435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.016449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.016604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.016617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 
00:27:23.982 [2024-07-24 21:52:32.017092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.017107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.017516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.017546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.017979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.018020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.018448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.018462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.018863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.018878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.019288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.019318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.019717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.019748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.019995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.020027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.020472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.020502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.020928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.020959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 
00:27:23.982 [2024-07-24 21:52:32.021396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.021412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.021694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.021708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.022065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.022109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.022532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.982 [2024-07-24 21:52:32.022561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.982 qpair failed and we were unable to recover it. 00:27:23.982 [2024-07-24 21:52:32.023075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.023106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.023468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.023481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.023820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.023834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.024235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.024250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.024668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.024681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.025113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.025145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 
00:27:23.983 [2024-07-24 21:52:32.025425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.025455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.025891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.025922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.026297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.026328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.026680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.026694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.026850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.026863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.027290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.027304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.027703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.027716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.028055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.028069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.028531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.028545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.028998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.029011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 
00:27:23.983 [2024-07-24 21:52:32.029428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.029442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.029836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.029850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.030195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.030212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.030357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.030370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.031026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.031039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.031436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.031450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.031808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.031821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.032225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.032240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.032647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.032677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.033056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.033088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 
00:27:23.983 [2024-07-24 21:52:32.033402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.033416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.033755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.033769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.034193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.034207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.034539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.034553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.034946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.034959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.035416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.035430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.035638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.035679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.036190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.036220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.036588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.036602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.036947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.036961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 
00:27:23.983 [2024-07-24 21:52:32.037448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.037478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.983 [2024-07-24 21:52:32.037860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.983 [2024-07-24 21:52:32.037890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.983 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.038331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.038361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.038800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.038830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.039345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.039359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.040572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.040600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.041021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.041037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.041448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.041480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.041917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.041948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.042446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.042480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 
00:27:23.984 [2024-07-24 21:52:32.042917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.042947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.043338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.043368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.043807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.043837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.044350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.044380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.044813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.044843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.045225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.045256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.045688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.045718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.046064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.046096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.046528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.046541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.046925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.046938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 
00:27:23.984 [2024-07-24 21:52:32.047372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.047403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.047766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.047796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.048244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.048280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.048707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.048720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.049186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.049217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.049580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.049610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.049984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.049997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.050338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.050351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.050672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.050686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.050971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.050999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 
00:27:23.984 [2024-07-24 21:52:32.051377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.051408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.051841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.051871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.052235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.052266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.052713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.984 [2024-07-24 21:52:32.052742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.984 qpair failed and we were unable to recover it. 00:27:23.984 [2024-07-24 21:52:32.053180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.053211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.053640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.053654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.054084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.054115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.054371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.054400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.054790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.054820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.055248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.055279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 
00:27:23.985 [2024-07-24 21:52:32.055666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.055696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.056117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.056148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.056581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.056595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.056997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.057011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.057419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.057450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.057836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.057866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.058243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.058274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.058717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.058747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.059128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.059155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.059562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.059593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 
00:27:23.985 [2024-07-24 21:52:32.060029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.060070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.060457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.060470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.060813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.060843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.061213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.061243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.061617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.061647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.062104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.062134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.062365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.062379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.062733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.062762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.063163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.063193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.063611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.063625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 
00:27:23.985 [2024-07-24 21:52:32.063975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.063989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.064347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.064378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.064877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.064893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.065221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.065235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.065597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.065611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.066005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.066019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.066371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.066385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.066731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.066744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.067171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.067185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.067590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.067603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 
00:27:23.985 [2024-07-24 21:52:32.068719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.068747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.985 qpair failed and we were unable to recover it. 00:27:23.985 [2024-07-24 21:52:32.069219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.985 [2024-07-24 21:52:32.069234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:23.986 qpair failed and we were unable to recover it. 00:27:24.259 [2024-07-24 21:52:32.069982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.259 [2024-07-24 21:52:32.070007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.259 qpair failed and we were unable to recover it. 00:27:24.259 [2024-07-24 21:52:32.070499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.070515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.070857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.070872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.071328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.071342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.071529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.071543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.071933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.071963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.072928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.072953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.073301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.073317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 
00:27:24.260 [2024-07-24 21:52:32.073782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.073812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.074198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.074228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.074600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.074631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.075009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.075039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.075435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.075464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.075841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.075872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.076251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.076288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.076863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.076894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.077329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.077361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.077764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.077795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 
00:27:24.260 [2024-07-24 21:52:32.078245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.078288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.078856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.078869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.079204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.079219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.079623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.079653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.080094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.080125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.080589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.080619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.081065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.081097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.081562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.081592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.081979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.082008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.082461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.082493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 
00:27:24.260 [2024-07-24 21:52:32.082929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.082959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.083331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.083346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.083695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.083711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.084115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.084129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.084289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.084302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.084735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.084765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.085200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.085231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.085803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.085832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.086280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.086310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 00:27:24.260 [2024-07-24 21:52:32.086753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.260 [2024-07-24 21:52:32.086782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.260 qpair failed and we were unable to recover it. 
00:27:24.260 [2024-07-24 21:52:32.087232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.261 [2024-07-24 21:52:32.087263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420
00:27:24.261 qpair failed and we were unable to recover it.
00:27:24.261 [2024-07-24 21:52:32.087651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.261 [2024-07-24 21:52:32.087665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420
00:27:24.261 qpair failed and we were unable to recover it.
00:27:24.266 [The same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously through [2024-07-24 21:52:32.177585].]
00:27:24.266 [2024-07-24 21:52:32.178053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.266 [2024-07-24 21:52:32.178084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.266 qpair failed and we were unable to recover it. 00:27:24.266 [2024-07-24 21:52:32.178454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.266 [2024-07-24 21:52:32.178484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.266 qpair failed and we were unable to recover it. 00:27:24.266 [2024-07-24 21:52:32.178916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.266 [2024-07-24 21:52:32.178946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.266 qpair failed and we were unable to recover it. 00:27:24.266 [2024-07-24 21:52:32.179327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.266 [2024-07-24 21:52:32.179358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.266 qpair failed and we were unable to recover it. 00:27:24.266 [2024-07-24 21:52:32.179726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.266 [2024-07-24 21:52:32.179739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.266 qpair failed and we were unable to recover it. 00:27:24.266 [2024-07-24 21:52:32.180091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.266 [2024-07-24 21:52:32.180122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.266 qpair failed and we were unable to recover it. 00:27:24.266 [2024-07-24 21:52:32.180509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.266 [2024-07-24 21:52:32.180539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.266 qpair failed and we were unable to recover it. 00:27:24.266 [2024-07-24 21:52:32.180974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.180987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.181324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.181338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.181670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.181685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 
00:27:24.267 [2024-07-24 21:52:32.182162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.182176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.182538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.182569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.182946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.182976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.183363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.183394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.183884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.183914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.184369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.184404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.184801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.184814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.185188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.185218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.185588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.185618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.186057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.186088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 
00:27:24.267 [2024-07-24 21:52:32.186529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.186559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.186990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.187020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.187466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.187496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.187961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.187991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.188429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.188473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.188819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.188849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.189365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.189396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.189770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.189799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.190225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.190256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.190692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.190721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 
00:27:24.267 [2024-07-24 21:52:32.191164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.191194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.191564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.191594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.191967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.191997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.192468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.192500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.192947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.192978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.193253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.193284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.193743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.193773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.194232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.194262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.194637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.194666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.195123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.195154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 
00:27:24.267 [2024-07-24 21:52:32.195590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.195619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.195997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.196027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.196481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.196512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.196973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.197003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.267 [2024-07-24 21:52:32.197495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.267 [2024-07-24 21:52:32.197527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.267 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.197964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.197994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.198374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.198405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.198913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.198943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.199390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.199404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.199810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.199844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 
00:27:24.268 [2024-07-24 21:52:32.200280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.200311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.200816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.200846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.201230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.201261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.201642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.201672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.202113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.202143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.202426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.202455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.202887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.202917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.203292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.203322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.203685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.203699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.204109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.204140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 
00:27:24.268 [2024-07-24 21:52:32.204566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.204595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.204968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.204997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.205387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.205418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.205781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.205794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.206202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.206216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.206559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.206590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.207063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.207093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.207534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.207565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.207923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.207954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.208339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.208381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 
00:27:24.268 [2024-07-24 21:52:32.208815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.208844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.209275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.209306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.209744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.209774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.210221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.210251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.210626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.210655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.210909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.210938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.211402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.211433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.211866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.211880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.212294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.212308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.212651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.212664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 
00:27:24.268 [2024-07-24 21:52:32.213083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.213115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.213543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.213581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.268 qpair failed and we were unable to recover it. 00:27:24.268 [2024-07-24 21:52:32.213976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.268 [2024-07-24 21:52:32.213990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.214334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.214349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.214748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.214762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.215155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.215169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.215581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.215610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.216038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.216078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.216499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.216529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.216909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.216944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 
00:27:24.269 [2024-07-24 21:52:32.217388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.217402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.217832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.217862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.218353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.218384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.218758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.218787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.219245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.219275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.219651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.219680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.220188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.220218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.220596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.220626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.221025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.221073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.221445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.221475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 
00:27:24.269 [2024-07-24 21:52:32.221973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.221986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.222535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.222566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.223017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.223053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.223441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.223471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.223964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.223994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.224434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.224465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.224915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.224945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.225464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.225494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.225938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.225968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.226456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.226487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 
00:27:24.269 [2024-07-24 21:52:32.226939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.226968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.227387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.227417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.227841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.227855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.228252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.228266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.269 qpair failed and we were unable to recover it. 00:27:24.269 [2024-07-24 21:52:32.228724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.269 [2024-07-24 21:52:32.228737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.229085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.229098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.229559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.229590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.230027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.230067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.230460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.230503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.230838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.230851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 
00:27:24.270 [2024-07-24 21:52:32.231255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.231270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.231659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.231672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.232093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.232123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.232571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.232601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.233114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.233145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.233530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.233559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.233940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.233969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.234354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.234384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.234819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.234848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.235225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.235261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 
00:27:24.270 [2024-07-24 21:52:32.235724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.235753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.236139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.236171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.236663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.236694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.237215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.237245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.237619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.237649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.238093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.238123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.238509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.238539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.238913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.238942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.239369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.239400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.239894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.239924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 
00:27:24.270 [2024-07-24 21:52:32.240351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.240382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.240758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.240797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.241205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.241235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.241617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.241647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.242102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.242117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.242517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.242547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.242991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.243021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.243193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.243207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.243547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.243560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.244052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.244066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 
00:27:24.270 [2024-07-24 21:52:32.244392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.244405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.270 [2024-07-24 21:52:32.244800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.270 [2024-07-24 21:52:32.244813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.270 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.245210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.245225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.245571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.245615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.247137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.247163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.247585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.247617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.248000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.248033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.248250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.248279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.248658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.248689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.249060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.249074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 
00:27:24.271 [2024-07-24 21:52:32.249424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.249438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.249831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.249845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.250253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.250268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.250698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.250711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.251141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.251155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.251613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.251626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.252311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.252325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.252776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.252790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.253200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.253214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.253669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.253685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 
00:27:24.271 [2024-07-24 21:52:32.254086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.254100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.254527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.254540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.254815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.254828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.255281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.255296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.255772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.255785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.256028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.256051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.256468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.256482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.256879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.256892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.257301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.257315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.257739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.257752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 
00:27:24.271 [2024-07-24 21:52:32.258179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.258194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.258548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.258578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.258951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.258981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.259322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.259353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.259875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.259889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.260253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.260266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.260758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.260772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.261231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.261262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.261705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.261734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 00:27:24.271 [2024-07-24 21:52:32.262227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.271 [2024-07-24 21:52:32.262257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.271 qpair failed and we were unable to recover it. 
00:27:24.272 [2024-07-24 21:52:32.262644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.262675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.263069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.263100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.263481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.263511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.263962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.263992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.264696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.264735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.265242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.265274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.265723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.265754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.266272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.266286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.266641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.266654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.267015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.267030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 
00:27:24.272 [2024-07-24 21:52:32.267423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.267453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.268063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.268094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.268607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.268637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.269018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.269057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.269470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.269483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.269930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.269960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.270395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.270426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.270860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.270890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.272326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.272354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.272553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.272571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 
00:27:24.272 [2024-07-24 21:52:32.273066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.273098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.273566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.273596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.274027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.274040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.274460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.274490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.274988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.275017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.275402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.275432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.275629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.275643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.275980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.275993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.276398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.276429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.276824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.276854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 
00:27:24.272 [2024-07-24 21:52:32.277277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.277308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.277728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.277758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.278249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.278280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.278712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.278743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.279127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.279158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.279597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.279627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.280014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.280066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.272 qpair failed and we were unable to recover it. 00:27:24.272 [2024-07-24 21:52:32.280500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.272 [2024-07-24 21:52:32.280531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.280964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.280993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.281456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.281488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 
00:27:24.273 [2024-07-24 21:52:32.281871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.281901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.282400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.282432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.282899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.282929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.283321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.283351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.283792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.283823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.284258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.284289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.284486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.284516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.284902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.284932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.285320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.285351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.285736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.285766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 
00:27:24.273 [2024-07-24 21:52:32.286221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.286252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.286774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.286805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.287177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.287191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.287529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.287559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.287942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.287972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.288598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.288631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.289071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.289102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.289466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.289496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.289885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.289916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.290282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.290319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 
00:27:24.273 [2024-07-24 21:52:32.290943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.290973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.291414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.291455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.291871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.291885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.292305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.292318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.292708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.292721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.293068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.293099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.293493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.293523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.293902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.293932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.273 qpair failed and we were unable to recover it. 00:27:24.273 [2024-07-24 21:52:32.294323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.273 [2024-07-24 21:52:32.294354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.294737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.294767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 
00:27:24.274 [2024-07-24 21:52:32.295028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.295046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.295671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.295701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.296208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.296240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.296610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.296641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.297018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.297055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.297481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.297510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.297965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.297995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.298379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.298409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.298796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.298826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.299257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.299272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 
00:27:24.274 [2024-07-24 21:52:32.299611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.299624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.299977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.300007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.300452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.300495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.300855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.300885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.301314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.301345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.301719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.301749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.302080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.302111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.302539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.302570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.302948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.302978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.303343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.303374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 
00:27:24.274 [2024-07-24 21:52:32.303863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.303893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.304285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.304316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.304695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.304724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.305125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.305139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.305476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.305506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.305876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.305907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.306295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.306326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.306689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.306719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.306984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.306997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.307485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.307521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 
00:27:24.274 [2024-07-24 21:52:32.307892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.307923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.308346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.308360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.308713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.308742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.309204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.309235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.309608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.309637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.310151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.274 [2024-07-24 21:52:32.310182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.274 qpair failed and we were unable to recover it. 00:27:24.274 [2024-07-24 21:52:32.310629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.310659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.311174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.311205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.311644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.311674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.312188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.312232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 
00:27:24.275 [2024-07-24 21:52:32.312680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.312710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.313212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.313243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.313696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.313725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.314176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.314190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.314691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.314721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.315157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.315188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.315578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.315607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.316039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.316078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.316521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.316551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.317215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.317248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 
00:27:24.275 [2024-07-24 21:52:32.317714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.317728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.318230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.318244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.318657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.318671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.319027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.319067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.320334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.320359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.320779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.320793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.321207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.321239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.321628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.321642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.322061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.322075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.322425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.322439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 
00:27:24.275 [2024-07-24 21:52:32.322871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.322886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.323370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.323384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.323598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.323612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.324310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.324326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.324671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.324684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.325138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.325152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.325640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.325654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.326345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.326360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.326819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.326833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.327180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.327196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 
00:27:24.275 [2024-07-24 21:52:32.327527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.327540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.327752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.327765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.328165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.328179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.275 [2024-07-24 21:52:32.328520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.275 [2024-07-24 21:52:32.328534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.275 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.328866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.328879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.329229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.329243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.329741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.329754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.330149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.330163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.330560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.330574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.330991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.331004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 
00:27:24.276 [2024-07-24 21:52:32.331463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.331477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.331646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.331660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.332053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.332067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.332695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.332709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.333062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.333076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.333510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.333523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.333916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.333930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.334282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.334296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.334496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.334509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.334904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.334917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 
00:27:24.276 [2024-07-24 21:52:32.335255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.335269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.335684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.335699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.336155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.336168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.336593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.336607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.337063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.337077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.337554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.337568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.337981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.337997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.338450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.338464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.338688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.338701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.339103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.339117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 
00:27:24.276 [2024-07-24 21:52:32.339581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.339595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.340052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.340066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.340534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.340548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.340888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.340901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.341313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.341328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.341562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.341576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.342055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.342069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.342492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.342506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.342852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.342866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.343360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.343374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 
00:27:24.276 [2024-07-24 21:52:32.343705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.276 [2024-07-24 21:52:32.343719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.276 qpair failed and we were unable to recover it. 00:27:24.276 [2024-07-24 21:52:32.344122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.344136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.344479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.344493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.344896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.344909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.345306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.345320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.345659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.345672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.346082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.346095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.346566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.346580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.346873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.346886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.347365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.347379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 
00:27:24.277 [2024-07-24 21:52:32.347835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.347849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.348192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.348206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.348594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.348608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.349010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.349023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.349509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.349523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.349995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.350008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.350363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.350377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.350865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.350878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.351230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.351244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.351645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.351658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 
00:27:24.277 [2024-07-24 21:52:32.352059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.352073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.352560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.352573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.353028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.353045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.353438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.353452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.353851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.353865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.354200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.354214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.354569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.354586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.355066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.355080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.355486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.355499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.355897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.355911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 
00:27:24.277 [2024-07-24 21:52:32.356378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.356392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.356844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.356857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.357278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.357292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.357791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.357804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.358214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.358228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.358686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.358705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.359109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.359132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.359838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.277 [2024-07-24 21:52:32.359854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.277 qpair failed and we were unable to recover it. 00:27:24.277 [2024-07-24 21:52:32.360216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.278 [2024-07-24 21:52:32.360230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.278 qpair failed and we were unable to recover it. 00:27:24.278 [2024-07-24 21:52:32.360744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.278 [2024-07-24 21:52:32.360757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.278 qpair failed and we were unable to recover it. 
00:27:24.278 [2024-07-24 21:52:32.361208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.278 [2024-07-24 21:52:32.361222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.278 qpair failed and we were unable to recover it. 00:27:24.278 [2024-07-24 21:52:32.361673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.278 [2024-07-24 21:52:32.361687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.278 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.361922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.361938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.362298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.362312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.362755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.362769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.363174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.363189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.363601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.363616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.364126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.364141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.364624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.364638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.365227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.365241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 
00:27:24.559 [2024-07-24 21:52:32.365650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.365664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.366071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.366085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.366490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.366504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.366984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.366998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.367476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.367490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.368186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.368202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.368550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.368564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.369040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.369059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.369468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.369482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.369815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.369828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 
00:27:24.559 [2024-07-24 21:52:32.370234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.370249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.370727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.370741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.371153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.371167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.371583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.371597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.372071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.372086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.372491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.372505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.372981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.372997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.373349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.373364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.373818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.373831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.374314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.374329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 
00:27:24.559 [2024-07-24 21:52:32.374653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.374666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.559 qpair failed and we were unable to recover it. 00:27:24.559 [2024-07-24 21:52:32.375079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.559 [2024-07-24 21:52:32.375093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.375547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.375561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.375964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.375978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.376492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.376506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.376855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.376868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.377221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.377236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.377587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.377601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.377757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.377770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.378246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.378260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 
00:27:24.560 [2024-07-24 21:52:32.378658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.378672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.379074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.379088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.379547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.379560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.379893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.379906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.380255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.380270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.380598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.380612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.381091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.381105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.381441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.381455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.381875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.381888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.382366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.382379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 
00:27:24.560 [2024-07-24 21:52:32.382794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.382808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.383263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.383277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.383757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.383771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.384254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.384268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.384663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.384678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.385094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.385110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.385520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.385534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.385864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.385878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.386293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.386307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 00:27:24.560 [2024-07-24 21:52:32.386730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.560 [2024-07-24 21:52:32.386744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.560 qpair failed and we were unable to recover it. 
00:27:24.560 [2024-07-24 21:52:32.387071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.560 [2024-07-24 21:52:32.387085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420
00:27:24.560 qpair failed and we were unable to recover it.
00:27:24.560-00:27:24.567 [the same three-line error sequence -- posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." -- repeats continuously for this qpair between 21:52:32.387 and 21:52:32.482; the duplicate occurrences are collapsed here.]
00:27:24.567 [2024-07-24 21:52:32.482524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.482538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.482957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.482987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.483481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.483512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.483970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.484000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.484250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.484264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.484750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.484780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.485270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.485300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.485786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.485800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.486210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.486242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.486669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.486699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 
00:27:24.567 [2024-07-24 21:52:32.487194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.487225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.487658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.487688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.488020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.488057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.488575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.488605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.489112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.489142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.489634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.489664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.489855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.489885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.490318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.490348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.490859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.490889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.491411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.491441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 
00:27:24.567 [2024-07-24 21:52:32.491906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.491936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.492376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.492406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.492905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.492936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.493458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.493489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.493865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.493895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.494353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.494384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.494874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.494903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.495393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.495423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.495849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.495879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 00:27:24.567 [2024-07-24 21:52:32.496391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.496432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.567 qpair failed and we were unable to recover it. 
00:27:24.567 [2024-07-24 21:52:32.496911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.567 [2024-07-24 21:52:32.496940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.497307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.497337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.497776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.497806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.498063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.498094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.498530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.498560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.499087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.499118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.499561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.499597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.500107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.500137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.500575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.500605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.501093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.501123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 
00:27:24.568 [2024-07-24 21:52:32.501557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.501587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.502074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.502105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.502595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.502624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.503153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.503167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.503588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.503618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.504057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.504088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.504538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.504567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.505061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.505092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.505606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.505636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.506140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.506154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 
00:27:24.568 [2024-07-24 21:52:32.506511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.506542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.507063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.507094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.507607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.507637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.508007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.508021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.508509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.508540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.509030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.509070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.509587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.509617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.510107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.510139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.510469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.510498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.510959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.510989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 
00:27:24.568 [2024-07-24 21:52:32.511505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.511536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.511913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.511942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.512457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.512488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.513006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.513035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.568 qpair failed and we were unable to recover it. 00:27:24.568 [2024-07-24 21:52:32.513488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.568 [2024-07-24 21:52:32.513519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.514029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.514068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.514385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.514414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.514850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.514880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.515392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.515423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.515858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.515889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 
00:27:24.569 [2024-07-24 21:52:32.516325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.516356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.516871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.516901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.517267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.517297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.517787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.517816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.518239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.518269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.518783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.518813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.519261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.519297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.519739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.519770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.520154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.520201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.520566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.520579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 
00:27:24.569 [2024-07-24 21:52:32.520985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.520999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.521419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.521450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.521843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.521873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.522383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.522413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.522855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.522885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.523395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.523426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.523852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.523882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.524251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.524281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.524709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.524722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.525065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.525079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 
00:27:24.569 [2024-07-24 21:52:32.525571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.525585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.525935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.525949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.569 [2024-07-24 21:52:32.526301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.569 [2024-07-24 21:52:32.526339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.569 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.526852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.526882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.527134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.527165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.527618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.527647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.528092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.528123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.528611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.528641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.528969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.529006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.529489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.529519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 
00:27:24.570 [2024-07-24 21:52:32.530033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.530081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.530350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.530380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.530818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.530848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.531364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.531396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.531908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.531938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.532382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.532412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.532898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.532928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.533393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.533424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.533880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.533910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.534422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.534453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 
00:27:24.570 [2024-07-24 21:52:32.534959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.534989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.535210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.535225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.535640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.535654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.535885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.535915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.536452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.536483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.536956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.536986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.537325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.537360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.537854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.537867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.538281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.538312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.538755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.538786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 
00:27:24.570 [2024-07-24 21:52:32.539210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.539241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.539756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.539787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.540227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.540258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.540668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.540681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.541022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.541035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.541541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.541572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.542072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.570 [2024-07-24 21:52:32.542102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.570 qpair failed and we were unable to recover it. 00:27:24.570 [2024-07-24 21:52:32.542592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.542622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.543145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.543176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.543571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.543601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 
00:27:24.571 [2024-07-24 21:52:32.544119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.544150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.544552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.544582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.545108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.545123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.545606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.545635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.546082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.546113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.546602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.546632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.547121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.547157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.547553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.547583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.548098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.548128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.548619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.548649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 
00:27:24.571 [2024-07-24 21:52:32.549094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.549125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.549560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.549573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.549940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.549969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.550466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.550496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.550961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.550992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.551529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.551561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.551990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.552020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.552265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.552296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.552738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.552768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.553300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.553331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 
00:27:24.571 [2024-07-24 21:52:32.553821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.553852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.554285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.554317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.554843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.554873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.555307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.555338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.555856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.555887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.556392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.556422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.556925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.556960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.557472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.557503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 [2024-07-24 21:52:32.558022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.558061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 
00:27:24.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3214558 Killed "${NVMF_APP[@]}" "$@" 00:27:24.571 [2024-07-24 21:52:32.558575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.558606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:24.571 [2024-07-24 21:52:32.559135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.571 [2024-07-24 21:52:32.559151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.571 qpair failed and we were unable to recover it. 00:27:24.571 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:24.571 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:24.572 [2024-07-24 21:52:32.559623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.559653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:24.572 [2024-07-24 21:52:32.560107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.560139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.560391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.560421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.560795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.560825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.561349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.561381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 
00:27:24.572 [2024-07-24 21:52:32.561709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.561722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.562122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.562136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.562615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.562646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.563029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.563068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.563561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.563592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.564094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.564126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.564629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.564660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.565113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.565145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.565533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.565563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.565997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.566028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 
00:27:24.572 [2024-07-24 21:52:32.566505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.566535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3215493 00:27:24.572 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3215493 00:27:24.572 [2024-07-24 21:52:32.567052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.567085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:24.572 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3215493 ']' 00:27:24.572 [2024-07-24 21:52:32.567597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.567631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.572 [2024-07-24 21:52:32.567886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.567917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:24.572 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.572 [2024-07-24 21:52:32.568362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.568394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:24.572 [2024-07-24 21:52:32.568775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.568791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 
00:27:24.572 21:52:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.572 [2024-07-24 21:52:32.569253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.569269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.569683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.569697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.570058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.570073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.570554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.570569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.570991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.571005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.571403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.571418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.571762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.571776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.572178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.572 [2024-07-24 21:52:32.572192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.572 qpair failed and we were unable to recover it. 00:27:24.572 [2024-07-24 21:52:32.572595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.572609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 
00:27:24.573 [2024-07-24 21:52:32.573023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.573036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.573526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.573541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.573957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.573970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.574370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.574384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.574805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.574820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.575033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.575050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.575707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.575723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.575912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.575926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.576337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.576351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.576698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.576712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 
00:27:24.573 [2024-07-24 21:52:32.577165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.577179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.577582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.577598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.577937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.577950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.578395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.578409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.578873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.578886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.579309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.579323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.579652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.579666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.580074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.580088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.580565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.580578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.581032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.581061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 
00:27:24.573 [2024-07-24 21:52:32.581481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.581495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.582187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.582200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.582690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.582704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.583180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.583194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.583653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.583667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.584072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.584092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.584443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.584457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.585103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.585117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.585558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.585572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.585925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.585939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 
00:27:24.573 [2024-07-24 21:52:32.586341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.586355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.586508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.586522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.573 [2024-07-24 21:52:32.586928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.573 [2024-07-24 21:52:32.586942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.573 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.587360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.587373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.587620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.587634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.588029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.588048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.588513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.588528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.588878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.588892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.589239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.589253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.589655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.589669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 
00:27:24.574 [2024-07-24 21:52:32.590174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.590189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.590888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.590903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.591329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.591344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.591748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.591762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.592157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.592171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.592636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.592650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.593111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.593125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.593580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.593594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.594017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.594031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.594328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.594342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 
00:27:24.574 [2024-07-24 21:52:32.594798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.594812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.595209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.595226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.595479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.595494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.595887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.595901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.596263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.596277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.596695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.596709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.597186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.597201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.597596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.597610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.598088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.598103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.598408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.598422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 
00:27:24.574 [2024-07-24 21:52:32.598773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.598787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.574 [2024-07-24 21:52:32.599188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.574 [2024-07-24 21:52:32.599202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.574 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.599536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.599550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.600007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.600022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.600353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.600368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.600659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.600673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.601070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.601084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.601503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.601516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.601901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.601915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.602322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.602336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 
00:27:24.575 [2024-07-24 21:52:32.602730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.602743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.603151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.603165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.603516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.603530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.603953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.603967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.604365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.604379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.604833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.604847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.605312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.605326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.605718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.605732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.605951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.605965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.606365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.606379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 
00:27:24.575 [2024-07-24 21:52:32.606763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.606777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.607114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.607129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.607532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.607545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.607959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.607973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.608360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.608374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.608712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.608726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.609037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.609055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.609454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.609468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.609823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.609837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.610360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.610374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 
00:27:24.575 [2024-07-24 21:52:32.610769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.610782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.611184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.611198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.611536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.611550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.611764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.611777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.611989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.612002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.612401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.612415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.575 qpair failed and we were unable to recover it. 00:27:24.575 [2024-07-24 21:52:32.612763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.575 [2024-07-24 21:52:32.612777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.613172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.613187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.613614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.613628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.614030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.614049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 
00:27:24.576 [2024-07-24 21:52:32.614465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.614479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.614841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.614854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.615260] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:27:24.576 [2024-07-24 21:52:32.615298] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.576 [2024-07-24 21:52:32.615322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.615336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.615791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.615804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.616280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.616293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.616750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.616764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.617189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.617203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.617624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.617639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.618116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.618130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 
00:27:24.576 [2024-07-24 21:52:32.618613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.618627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.619052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.619066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.619424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.619437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.619840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.619854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.620268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.620282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.620676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.620690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.621104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.621118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.621594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.621608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.622017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.622031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.622437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.622451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 
00:27:24.576 [2024-07-24 21:52:32.622876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.622889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.623232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.623246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.623723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.623737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.624211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.624226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.624455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.624469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.624926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.624940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.625405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.625419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.625875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.625888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.626227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.576 [2024-07-24 21:52:32.626241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.576 qpair failed and we were unable to recover it. 00:27:24.576 [2024-07-24 21:52:32.626695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.626709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 
00:27:24.577 [2024-07-24 21:52:32.627036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.627054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.627480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.627496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.627904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.627917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.628336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.628350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.628772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.628785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.629189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.629203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.629603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.629616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.630010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.630023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.630466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.630481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.630937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.630950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 
00:27:24.577 [2024-07-24 21:52:32.631429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.631444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.631656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.631670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.632142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.632156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.632620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.632634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.633023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.633037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.633523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.633538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.634007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.634021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.634416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.634430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.634909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.634923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.635323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.635338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 
00:27:24.577 [2024-07-24 21:52:32.635796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.635810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.636264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.636278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.636509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.636523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.637022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.637035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.637385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.637399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.637746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.637759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.638258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.638273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.638740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.638754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.639191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.639205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.639551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.639564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 
00:27:24.577 [2024-07-24 21:52:32.639995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.640009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.640485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.640500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.640956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.640969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.641385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.577 [2024-07-24 21:52:32.641399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.577 qpair failed and we were unable to recover it. 00:27:24.577 [2024-07-24 21:52:32.641880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.641894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.578 [2024-07-24 21:52:32.642374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.642388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.642791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.642805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.643235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.643249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.643641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.643655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.644088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.644102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 
00:27:24.578 [2024-07-24 21:52:32.644579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.644593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.644936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.644952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.645406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.645420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.645876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.645890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.646371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.646385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.646772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.646786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.647265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.647278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.647679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.647693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.648180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.648195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.648708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.648722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 
00:27:24.578 [2024-07-24 21:52:32.649197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.649211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.649613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.649626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.650034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.650053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.650310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.650323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.650712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.650725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.651113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.651127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.651532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.651545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.652026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.652040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.652389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.652402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.652589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.652603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 
00:27:24.578 [2024-07-24 21:52:32.653066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.653080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.578 [2024-07-24 21:52:32.653557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.578 [2024-07-24 21:52:32.653571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.578 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.654273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.654289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.654745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.654759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.655191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.655206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.655611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.655625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.656080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.656094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.656389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.656402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.656828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.656842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.657275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.657288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 
00:27:24.853 [2024-07-24 21:52:32.657530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.657543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.657947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.657961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.658458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.658472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.658807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.658820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.659165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.659179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.659603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.659617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.659952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.659965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.660443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.660457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.660864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.660878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.661176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.661191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 
00:27:24.853 [2024-07-24 21:52:32.661587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.661600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.662074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.662091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.662545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.662559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.663017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.663030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.663487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.663501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.663793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.663807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.664217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.664231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.664707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.664720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.665132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.665146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.665548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.665562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 
00:27:24.853 [2024-07-24 21:52:32.665774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.665787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.666268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.666282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.666698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.666711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.667111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.667125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.667603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.667617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.668051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.668066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.668540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.668554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.853 qpair failed and we were unable to recover it. 00:27:24.853 [2024-07-24 21:52:32.669038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.853 [2024-07-24 21:52:32.669057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.669300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.669314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.669765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.669779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 
00:27:24.854 [2024-07-24 21:52:32.670252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.670266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.670557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.670572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.670917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.670931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.671347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.671361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.671815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.671829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.672075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.672088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.672569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.672583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.672989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.673002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.673462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.673477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.673952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.673966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 
00:27:24.854 [2024-07-24 21:52:32.674378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.674392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.674868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.674883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.675289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.675303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.675701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.675715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.676193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.676207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.676662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.676676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.677152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.677166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.677645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.677659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.678057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.678071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.678548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.678562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 
00:27:24.854 [2024-07-24 21:52:32.678984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.678999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.679456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.679473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.679956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.679970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.680468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.680483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.680882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.680897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.681240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.681254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.681717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.681731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.682186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.682201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.682677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.682692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.683171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.683186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 
00:27:24.854 [2024-07-24 21:52:32.683484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.683498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.683900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.683914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.684393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.684407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.684905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.684919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.854 [2024-07-24 21:52:32.685398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.854 [2024-07-24 21:52:32.685413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.854 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.685822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.685837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.686264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.686279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.686676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.686690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.686904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.855 [2024-07-24 21:52:32.687093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.687108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 
00:27:24.855 [2024-07-24 21:52:32.687520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.687535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.687992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.688006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.688486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.688502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.688959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.688973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.689428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.689444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.689849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.689864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.690272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.690287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.690695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.690709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.691201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.691217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.691675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.691690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 
00:27:24.855 [2024-07-24 21:52:32.692118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.692133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.692609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.692624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.693028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.693056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.693446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.693460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.693860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.693874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.694299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.694315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.694718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.694734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.695213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.695229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.695657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.695672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.696161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.696179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 
00:27:24.855 [2024-07-24 21:52:32.696634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.696652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.696996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.697010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.697420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.697435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.697798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.697812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.698231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.698246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.698657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.698671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.699127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.699141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.699625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.699639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.700145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.700160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.700624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.700638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 
00:27:24.855 [2024-07-24 21:52:32.701038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.701062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.701402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.701417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.701891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.855 [2024-07-24 21:52:32.701905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.855 qpair failed and we were unable to recover it. 00:27:24.855 [2024-07-24 21:52:32.702297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.702311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.702791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.702805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.703205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.703222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.703700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.703714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.704146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.704161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.704615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.704629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.704997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.705011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 
00:27:24.856 [2024-07-24 21:52:32.705418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.705433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.705887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.705901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.706290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.706306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.706705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.706720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.707012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.707026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.707485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.707500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.707886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.707899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.708357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.708372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.708790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.708805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.709055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.709070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 
00:27:24.856 [2024-07-24 21:52:32.709481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.709495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.709920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.709934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.710324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.710339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.710761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.710775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.711250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.711265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.711667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.711681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.712029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.712048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.712283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.712297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.712752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.712766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.713102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.713117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 
00:27:24.856 [2024-07-24 21:52:32.713596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.713610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.713949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.713963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.714369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.714384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.714881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.714895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.715375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.715390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.715795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.715809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.716287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.716303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.716704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.716718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.717171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.717185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 00:27:24.856 [2024-07-24 21:52:32.717509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.856 [2024-07-24 21:52:32.717523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.856 qpair failed and we were unable to recover it. 
00:27:24.856 [2024-07-24 21:52:32.718010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.718024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.718365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.718380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.718856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.718871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.719307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.719321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.719793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.719808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.720154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.720171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.720650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.720664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.721066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.721081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.721565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.721579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.721980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.721995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 
00:27:24.857 [2024-07-24 21:52:32.722398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.722412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.722805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.722821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.723178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.723196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.723677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.723699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.724183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.724201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.724611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.724629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.724915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.724930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.725349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.725364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.725763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.725778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.726263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.726279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 
00:27:24.857 [2024-07-24 21:52:32.726704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.726719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.727109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.727124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.727584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.727600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.728060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.728075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.728567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.728583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.729067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.729083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.729423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.729438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.729836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.729850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.730256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.730271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.857 [2024-07-24 21:52:32.730659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.730674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 
00:27:24.857 [2024-07-24 21:52:32.731156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.857 [2024-07-24 21:52:32.731172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.857 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.731579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.731595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.732023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.732038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.732497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.732512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.732856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.732869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.733263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.733277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.733759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.733772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.734114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.734128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.734585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.734599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.734810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.734824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 
00:27:24.858 [2024-07-24 21:52:32.735225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.735239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.735742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.735755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.735992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.736006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.736460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.736474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.736926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.736940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.737418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.737436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.737975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.737988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.738391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.738405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.738819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.738833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.739235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.739249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 
00:27:24.858 [2024-07-24 21:52:32.739724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.739737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.740145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.740158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.740619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.740633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.741036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.741055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.741512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.741525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.741865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.741878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.742284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.742298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.742703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.742718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.743201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.743215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.743624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.743638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 
00:27:24.858 [2024-07-24 21:52:32.744033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.744050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.744202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.744215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.744705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.744719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.745119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.745134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.745503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.745516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.745968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.745982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.746390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.746404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.858 [2024-07-24 21:52:32.746801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.858 [2024-07-24 21:52:32.746815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.858 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.747270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.747284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.747635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.747648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 
00:27:24.859 [2024-07-24 21:52:32.748123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.748137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.748525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.748539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.748956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.748970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.749402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.749416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.749642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.749656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.750145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.750159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.750584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.750598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.751068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.751082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.751556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.751569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.752050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.752064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 
00:27:24.859 [2024-07-24 21:52:32.752472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.752486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.752940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.752954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.753355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.753369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.753773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.753787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.754206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.754219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.754679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.754697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.755175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.755189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.755579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.755593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.756071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.756086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.756484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.756498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 
00:27:24.859 [2024-07-24 21:52:32.756971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.756985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.757386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.757400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.757801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.757815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.758194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.758208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.758616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.758631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.759032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.759052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.759458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.759472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.759925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.759938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.760412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.760426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 00:27:24.859 [2024-07-24 21:52:32.760837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.859 [2024-07-24 21:52:32.760853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.859 qpair failed and we were unable to recover it. 
00:27:24.859 [2024-07-24 21:52:32.761092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.859 [2024-07-24 21:52:32.761108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420
00:27:24.859 qpair failed and we were unable to recover it.
00:27:24.859 [2024-07-24 21:52:32.761591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.859 [2024-07-24 21:52:32.761607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420
00:27:24.859 qpair failed and we were unable to recover it.
00:27:24.859 [2024-07-24 21:52:32.762006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.859 [2024-07-24 21:52:32.762020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420
00:27:24.859 qpair failed and we were unable to recover it.
00:27:24.859 [2024-07-24 21:52:32.762102] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:24.859 [2024-07-24 21:52:32.762132] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:24.859 [2024-07-24 21:52:32.762139] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:24.859 [2024-07-24 21:52:32.762146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:24.859 [2024-07-24 21:52:32.762151] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:24.859 [2024-07-24 21:52:32.762265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:24.859 [2024-07-24 21:52:32.762450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.860 [2024-07-24 21:52:32.762372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:24.860 [2024-07-24 21:52:32.762464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420
00:27:24.860 qpair failed and we were unable to recover it.
00:27:24.860 [2024-07-24 21:52:32.762642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:24.860 [2024-07-24 21:52:32.762642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:24.860 [2024-07-24 21:52:32.762966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.860 [2024-07-24 21:52:32.762979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420
00:27:24.860 qpair failed and we were unable to recover it.
00:27:24.860 [2024-07-24 21:52:32.763456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.860 [2024-07-24 21:52:32.763470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420
00:27:24.860 qpair failed and we were unable to recover it.
00:27:24.860 [2024-07-24 21:52:32.763976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.860 [2024-07-24 21:52:32.763990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420
00:27:24.860 qpair failed and we were unable to recover it.
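The app_setup_trace notices above spell out how trace data can be captured from the still-running nvmf target. A minimal sketch of that capture step, assuming it is run on the test node while the target process (instance 0) is alive; the destination path for the saved copy is only illustrative:

    # Snapshot the nvmf tracepoint group from the running SPDK app (instance 0),
    # as suggested by the app_setup_trace notices above.
    spdk_trace -s nvmf -i 0

    # Or keep the raw shared-memory trace file for offline analysis/debug
    # (destination path is an arbitrary example).
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0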
00:27:24.860 [2024-07-24 21:52:32.764414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.764429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.764836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.764850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.765334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.765348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.765805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.765819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.766345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.766360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.766836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.766850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.767258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.767273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.767756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.767770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.768256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.768271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.768723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.768737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 
00:27:24.860 [2024-07-24 21:52:32.769218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.769233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.769636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.769651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.770105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.770120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.770643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.770657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.771136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.771150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.771607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.771625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.772101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.772115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.772627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.772642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.773052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.773067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.773568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.773583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 
00:27:24.860 [2024-07-24 21:52:32.774127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.774143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.774549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.774565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.775030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.775049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.775550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.775566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.776047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.776064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.776465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.776481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.776948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.776965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.777388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.777404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.777814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.777829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.778232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.778249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 
00:27:24.860 [2024-07-24 21:52:32.778722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.778738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.779223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.860 [2024-07-24 21:52:32.779242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.860 qpair failed and we were unable to recover it. 00:27:24.860 [2024-07-24 21:52:32.779698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.779713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.780188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.780205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.780659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.780675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.781150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.781167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.781674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.781690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.782214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.782231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.782593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.782608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.783085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.783101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 
00:27:24.861 [2024-07-24 21:52:32.783554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.783568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.784022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.784037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.784546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.784560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.785047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.785062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.785544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.785560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.785977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.785991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.786421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.786437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.786834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.786848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.787331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.787349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.787749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.787764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 
00:27:24.861 [2024-07-24 21:52:32.788194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.788209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.788628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.788642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.789040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.789058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.789537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.789553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.790035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.790053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.790538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.790557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.790962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.790976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.791458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.791473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.791958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.791973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.792428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.792443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 
00:27:24.861 [2024-07-24 21:52:32.792967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.792982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.793457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.793472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.793977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.793991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.794471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.794487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.794990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.795005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.795450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.795464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.795916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.795930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.796409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.796424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.796880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.861 [2024-07-24 21:52:32.796894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.861 qpair failed and we were unable to recover it. 00:27:24.861 [2024-07-24 21:52:32.797435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.797450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 
00:27:24.862 [2024-07-24 21:52:32.797937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.797953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.798433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.798449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.798855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.798870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.799271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.799287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.799762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.799776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.800282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.800296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.800818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.800832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.801336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.801349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.801846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.801859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.802311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.802325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 
00:27:24.862 [2024-07-24 21:52:32.802714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.802727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.803205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.803219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.803707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.803722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.804205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.804219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.804702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.804716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.805198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.805213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.805617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.805631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.806108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.806124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.806626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.806640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.807162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.807178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 
00:27:24.862 [2024-07-24 21:52:32.807659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.807674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.808131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.808147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.808666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.808680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.809167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.809182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.809638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.809653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.810076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.810094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.810608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.810623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.811137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.811152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.862 [2024-07-24 21:52:32.811606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.862 [2024-07-24 21:52:32.811621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.862 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.812102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.812117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 
00:27:24.863 [2024-07-24 21:52:32.812572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.812587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.813017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.813034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.813517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.813532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.814011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.814026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.814441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.814456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.814906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.814920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.815399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.815414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.815842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.815856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.816338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.816353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.816764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.816779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 
00:27:24.863 [2024-07-24 21:52:32.817234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.817249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.817727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.817741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.818224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.818239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.818743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.818757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.819226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.819240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.819725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.819739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.820143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.820157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.820614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.820628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.821156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.821170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.821678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.821691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 
00:27:24.863 [2024-07-24 21:52:32.822173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.822188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.822668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.822683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.823164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.823179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.823579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.823592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.824064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.824078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.824508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.824522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.824997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.825010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.825519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.825533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.825955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.825969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.826378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.826392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 
00:27:24.863 [2024-07-24 21:52:32.826860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.826874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.827348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.827362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.827767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.827781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.828260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.828274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.828690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.828704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.863 qpair failed and we were unable to recover it. 00:27:24.863 [2024-07-24 21:52:32.829190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.863 [2024-07-24 21:52:32.829207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.829601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.829615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.830032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.830054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.830453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.830466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.830874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.830888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 
00:27:24.864 [2024-07-24 21:52:32.831294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.831308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.831714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.831727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.832204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.832229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.832763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.832776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.833279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.833293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.833777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.833791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.834193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.834207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.834678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.834692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.835152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.835166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.835903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.835918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 
00:27:24.864 [2024-07-24 21:52:32.836399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.836413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.836828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.836841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.837303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.837317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.837717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.837731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.838130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.838144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.838544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.838558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.838947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.838961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.839442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.839456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.839878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.839892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.840296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.840310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 
00:27:24.864 [2024-07-24 21:52:32.840766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.840779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.841233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.841247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.841703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.841717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.842191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.842205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.842669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.842682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.843118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.843132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.843537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.843551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.844040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.844059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.844480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.844494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.844972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.844986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 
00:27:24.864 [2024-07-24 21:52:32.845442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.845456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.845956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.864 [2024-07-24 21:52:32.845969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.864 qpair failed and we were unable to recover it. 00:27:24.864 [2024-07-24 21:52:32.846356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.846370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.846828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.846842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.847296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.847311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.847788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.847807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.848228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.848242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.848664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.848678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.849152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.849166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.849561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.849575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 
00:27:24.865 [2024-07-24 21:52:32.850032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.850049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.850485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.850499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.850996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.851010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.851489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.851504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.851912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.851925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.852316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.852330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.852743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.852757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.853211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.853225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.853630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.853644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.854084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.854098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 
00:27:24.865 [2024-07-24 21:52:32.854575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.854589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.855092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.855106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.855586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.855600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.855998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.856011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.856495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.856509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.856908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.856921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.857398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.857412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.857914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.857928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.858410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.858423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.858896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.858910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 
00:27:24.865 [2024-07-24 21:52:32.859430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.859444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.859826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.859840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.860310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.860325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.860826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.860839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.861258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.861272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.861711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.861724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.862154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.862169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.862595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.862609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.863097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.863112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 00:27:24.865 [2024-07-24 21:52:32.863568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.865 [2024-07-24 21:52:32.863581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.865 qpair failed and we were unable to recover it. 
00:27:24.866 [2024-07-24 21:52:32.863999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.864013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.864518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.864532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.864955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.864969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.865384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.865398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.865834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.865848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.866325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.866342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.866839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.866853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.867281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.867295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.867747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.867761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.868243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.868257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 
00:27:24.866 [2024-07-24 21:52:32.868662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.868675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.869146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.869161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.869562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.869576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.870060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.870075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.870483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.870497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.870894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.870907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.871251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.871265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.871717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.871731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.872213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.872227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.872712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.872726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 
00:27:24.866 [2024-07-24 21:52:32.873229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.873242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.873723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.873737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.874144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.874158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.874640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.874653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.875054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.875069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.875475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.875489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.875964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.875978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.876484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.876497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.876979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.876993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.877416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.877430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 
00:27:24.866 [2024-07-24 21:52:32.877826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.877839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.878316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.878330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.878819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.878833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.879336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.879350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.879844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.879858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.880286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.880300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.866 [2024-07-24 21:52:32.880802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.866 [2024-07-24 21:52:32.880815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.866 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.881341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.881355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.881771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.881784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.882260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.882274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 
00:27:24.867 [2024-07-24 21:52:32.882694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.882708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.883182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.883197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.883656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.883670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.884077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.884091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.884572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.884586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.885084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.885101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.885580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.885594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.886090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.886104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.886558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.886572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.887051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.887066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 
00:27:24.867 [2024-07-24 21:52:32.887551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.887566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.887974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.887989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.888408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.888422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.888940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.888955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.889384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.889399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.889821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.889835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.890337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.890352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.890762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.890776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.891132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.891146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.891557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.891571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 
00:27:24.867 [2024-07-24 21:52:32.892085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.892100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.892670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.892684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.893125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.893141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.893550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.893564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.893978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.893992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.894468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.894482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.894941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.894956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.895477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.895491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.895960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.867 [2024-07-24 21:52:32.895975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.867 qpair failed and we were unable to recover it. 00:27:24.867 [2024-07-24 21:52:32.896404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.896418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 
00:27:24.868 [2024-07-24 21:52:32.896665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.896679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.897083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.897097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.897451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.897487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.897909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.897922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.898374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.898384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.898831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.898842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.899259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.899270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.899618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.899628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.900015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.900026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.900341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.900353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 
00:27:24.868 [2024-07-24 21:52:32.900829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.900840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.901343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.901353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.901823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.901833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.902328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.902338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.902635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.902644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.903100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.903114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.903467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.903478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.903939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.903949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.904388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.904398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.904811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.904821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 
00:27:24.868 [2024-07-24 21:52:32.905213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.905224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.905671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.905680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.906092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.906102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.906448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.906458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.906867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.906877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.907345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.907356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.907753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.907763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.908230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.908240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.908593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.908603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.909007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.909017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 
00:27:24.868 [2024-07-24 21:52:32.909517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.909528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.910005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.910015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.910438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.910448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.910869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.910879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.911344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.911354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.868 [2024-07-24 21:52:32.911703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.868 [2024-07-24 21:52:32.911713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.868 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.912091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.912102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.912503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.912513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.912980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.912990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.913388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.913398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 
00:27:24.869 [2024-07-24 21:52:32.913818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.913828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.914274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.914284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e48000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.914695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.914710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.915193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.915207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.915709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.915723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.916201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.916215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.916572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.916586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.917092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.917107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.917592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.917605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.918081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.918094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 
00:27:24.869 [2024-07-24 21:52:32.918506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.918520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.918928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.918942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.919347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.919361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.919817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.919830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.920239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.920253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.920672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.920688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.921141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.921155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.921662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.921676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.922201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.922215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.922614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.922627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 
00:27:24.869 [2024-07-24 21:52:32.923091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.923106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.923560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.923574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.923975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.923989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.924449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.924463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.924938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.924952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.925435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.925449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.925906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.925920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.926322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.926336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.926769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.926782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.927270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.927284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 
00:27:24.869 [2024-07-24 21:52:32.927765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.927779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.928185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.928199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.928607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.869 [2024-07-24 21:52:32.928621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.869 qpair failed and we were unable to recover it. 00:27:24.869 [2024-07-24 21:52:32.929106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.929120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.929604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.929618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.930069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.930082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.930472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.930486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.930978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.930992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.931472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.931486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.931957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.931971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 
00:27:24.870 [2024-07-24 21:52:32.932482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.932496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.932936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.932950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.933411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.933425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.933952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.933966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.934390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.934404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.934858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.934872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.935291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.935304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.935784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.935798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.936226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.936245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.936596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.936609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 
00:27:24.870 [2024-07-24 21:52:32.937008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.937022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.937439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.937452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.937904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.937917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.938417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.938431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.938856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.938870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.939368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.939386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.939906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.939920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.940421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.940436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.940857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.940870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.941278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.941292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 
00:27:24.870 [2024-07-24 21:52:32.941773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.941786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.942284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.942297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.942780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.942793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.943196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.943210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.943688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.943702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.944111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.944125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.944462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.944476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.944929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.944942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.945421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.945435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 00:27:24.870 [2024-07-24 21:52:32.945866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.945880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.870 qpair failed and we were unable to recover it. 
00:27:24.870 [2024-07-24 21:52:32.946380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.870 [2024-07-24 21:52:32.946394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.946815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.946829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.947328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.947342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.947822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.947835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.948295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.948309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.948835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.948849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.949327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.949341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.949850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.949863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.950346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.950360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.950866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.950879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 
00:27:24.871 [2024-07-24 21:52:32.951388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.951402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.951799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.951812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.952211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.952225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.952684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.952697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.953209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.953223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.953674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.953688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.954084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.954098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.954557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.954570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:24.871 [2024-07-24 21:52:32.954996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.871 [2024-07-24 21:52:32.955010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:24.871 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.955414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.955429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 
00:27:25.146 [2024-07-24 21:52:32.955836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.955852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.956373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.956388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.956898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.956911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.957319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.957334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.957743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.957756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.958102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.958122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.958478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.958492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.958969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.958983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.959479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.959493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.959897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.959911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 
00:27:25.146 [2024-07-24 21:52:32.960386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.960400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.960809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.960823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.961223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.961237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.961716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.961730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.962231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.962245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.962724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.962738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.963200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.963214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.963645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.963658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.146 [2024-07-24 21:52:32.964114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.146 [2024-07-24 21:52:32.964128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.146 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.964654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.964668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 
00:27:25.147 [2024-07-24 21:52:32.965149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.965163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.965680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.965693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.966204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.966219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.966637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.966650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.967128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.967142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.967546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.967560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.968058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.968073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.968494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.968507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.968931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.968945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.969457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.969471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 
00:27:25.147 [2024-07-24 21:52:32.969898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.969911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.970385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.970399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.970864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.970878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.971392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.971406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.971892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.971906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.972361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.972375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.972855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.972869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.973351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.973365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.973846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.973859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.974260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.974274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 
00:27:25.147 [2024-07-24 21:52:32.974728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.974742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.975157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.975170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.975649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.975662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.976063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.976077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.976559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.976572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.977056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.977070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.977552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.977566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.977964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.977978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.978322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.978336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.978764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.978777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 
00:27:25.147 [2024-07-24 21:52:32.979236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.979250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.979727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.147 [2024-07-24 21:52:32.979741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.147 qpair failed and we were unable to recover it. 00:27:25.147 [2024-07-24 21:52:32.980175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.980189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.980646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.980660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.981056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.981070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.981532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.981546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.982062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.982076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.982613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.982626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.983086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.983100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.983579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.983593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 
00:27:25.148 [2024-07-24 21:52:32.984099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.984112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.984554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.984568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.985024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.985038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.985499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.985512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.986014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.986027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.986509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.986523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.987020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.987034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.987513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.987526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.988030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.988047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.988582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.988596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 
00:27:25.148 [2024-07-24 21:52:32.988996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.989009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.989410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.989424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.989900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.989916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.990273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.990287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.990717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.990731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.991207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.991221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.991584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.991598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.992022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.992037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.992497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.992510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.992963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.992976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 
00:27:25.148 [2024-07-24 21:52:32.993431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.993445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.993919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.993932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.994292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.994306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.994781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.994795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.995215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.995229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.995629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.148 [2024-07-24 21:52:32.995643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.148 qpair failed and we were unable to recover it. 00:27:25.148 [2024-07-24 21:52:32.996337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:32.996352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:32.996787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:32.996801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:32.997297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:32.997311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:32.997771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:32.997784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 
00:27:25.149 [2024-07-24 21:52:32.998235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:32.998249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:32.998725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:32.998738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:32.999147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:32.999161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:32.999592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:32.999605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.000084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.000098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.000508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.000522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.000975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.000989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.001388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.001403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.001886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.001899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.002328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.002341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 
00:27:25.149 [2024-07-24 21:52:33.002844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.002858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.003334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.003348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.003838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.003851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.004259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.004273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.004727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.004740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.005216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.005230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.005693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.005706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.006226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.006240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.006733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.006746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.007158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.007173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 
00:27:25.149 [2024-07-24 21:52:33.007569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.007582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.008058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.008072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.008440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.008457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.008861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.008875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.009300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.009314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.009817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.009831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.010308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.010322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.010725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.010739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.011214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.011228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 00:27:25.149 [2024-07-24 21:52:33.011531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.149 [2024-07-24 21:52:33.011545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.149 qpair failed and we were unable to recover it. 
00:27:25.149 [2024-07-24 21:52:33.012024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.012037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.012504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.012518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.013046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.013060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.013553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.013567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.013968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.013981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.014472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.014486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.014917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.014932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.015378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.015392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.015892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.015906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.016339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.016353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 
00:27:25.150 [2024-07-24 21:52:33.016808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.016822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.017303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.017317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.017770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.017784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.018271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.018285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.018771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.018785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.019240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.019254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.019725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.019738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.020208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.020222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.020649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.020662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.021087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.021101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 
00:27:25.150 [2024-07-24 21:52:33.021548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.021562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.022053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.022067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.022551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.022564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.022959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.022972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.023432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.023446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.023950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.023964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.024484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.024498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.024984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.024997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.025410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.025425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.025858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.025872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 
00:27:25.150 [2024-07-24 21:52:33.026325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.026340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.026737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.026750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.027233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.027250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.150 [2024-07-24 21:52:33.027650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.150 [2024-07-24 21:52:33.027664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.150 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.028140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.028154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.028607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.028620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.029023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.029036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.029492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.029506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.029987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.030001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.030500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.030513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 
00:27:25.151 [2024-07-24 21:52:33.030995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.031009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.031494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.031508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.031990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.032004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.032498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.032512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.032994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.033007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.033414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.033429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.033910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.033923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.034353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.034367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.034769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.034782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.035188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.035202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 
00:27:25.151 [2024-07-24 21:52:33.035676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.035689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.036183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.036196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.036601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.036615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.037070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.037084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.037562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.037576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.037980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.037994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.038482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.038496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.038849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.038863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.039347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.039361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.151 [2024-07-24 21:52:33.039842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.039856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 
00:27:25.151 [2024-07-24 21:52:33.040356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.151 [2024-07-24 21:52:33.040375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.151 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.040886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.040900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.041306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.041320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.041796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.041809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.042286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.042300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.042761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.042774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.043295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.043309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.043810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.043824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.044321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.044335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.044813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.044827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 
00:27:25.152 [2024-07-24 21:52:33.045312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.045327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.045724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.045737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.046211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.046227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.046689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.046703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.047173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.047188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.047700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.047714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.048117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.048131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.048605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.048619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.049108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.049121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.049619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.049632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 
00:27:25.152 [2024-07-24 21:52:33.050033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.050049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.050541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.050555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.050966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.050979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.051431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.051445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.051854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.051868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.052307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.052321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.052796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.052810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.053223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.053237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.053713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.053727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.054078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.054092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 
00:27:25.152 [2024-07-24 21:52:33.054544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.054558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.054967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.054980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.055366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.055380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.055783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.055797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.056273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.152 [2024-07-24 21:52:33.056287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.152 qpair failed and we were unable to recover it. 00:27:25.152 [2024-07-24 21:52:33.056685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.056699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.057106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.057120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.057596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.057609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.058109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.058123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.058615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.058628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 
00:27:25.153 [2024-07-24 21:52:33.059106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.059120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.059605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.059619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.060117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.060131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.060610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.060623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.061104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.061118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.061622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.061636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.062162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.062176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.062630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.062644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.063122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.063136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.063594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.063608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 
00:27:25.153 [2024-07-24 21:52:33.064085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.064099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.064585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.064598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.065030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.065049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.065537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.065551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.066054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.066068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.066587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.066600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.067096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.067110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.067590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.067604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.068060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.068074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.068552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.068566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 
00:27:25.153 [2024-07-24 21:52:33.069054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.069068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.069473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.069486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.069892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.069906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.070385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.070399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.070899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.070912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.071340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.071355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.071834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.071848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.072190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.072204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.072663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.153 [2024-07-24 21:52:33.072676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.153 qpair failed and we were unable to recover it. 00:27:25.153 [2024-07-24 21:52:33.073201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.073215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 
00:27:25.154 [2024-07-24 21:52:33.073613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.073627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.074090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.074103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.074620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.074633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.075165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.075180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.075653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.075667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.076173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.076187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.076619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.076633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.077036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.077054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.077452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.077465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.077947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.077961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 
00:27:25.154 [2024-07-24 21:52:33.078369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.078383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.078874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.078888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.079346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.079360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.079800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.079814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.080230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.080244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.080638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.080652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.081113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.081127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.081606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.081620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.082024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.082037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.082443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.082457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 
00:27:25.154 [2024-07-24 21:52:33.082934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.082948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.083436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.083450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.083933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.083949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.084430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.084445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.084849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.084862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.085263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.085277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.085733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.085748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.086155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.086169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.086646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.086660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.087165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.087179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 
00:27:25.154 [2024-07-24 21:52:33.087514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.087527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.087958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.087972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.088414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.154 [2024-07-24 21:52:33.088428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.154 qpair failed and we were unable to recover it. 00:27:25.154 [2024-07-24 21:52:33.088910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.088924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.089409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.089423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.089844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.089858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.090359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.090373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.090850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.090864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.091364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.091378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.091783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.091797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 
00:27:25.155 [2024-07-24 21:52:33.092272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.092286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.092732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.092745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.093249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.093263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.093738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.093752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.094212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.094226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.094625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.094638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.095117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.095131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.095529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.095543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.096000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.096013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.096542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.096556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 
00:27:25.155 [2024-07-24 21:52:33.097032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.097048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.097526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.097540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.097985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.097999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.098397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.098411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.098817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.098831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.099218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.099232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.099712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.099725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.100158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.100172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.100575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.100589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.101046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.101060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 
00:27:25.155 [2024-07-24 21:52:33.101538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.101551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.102059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.102073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.102600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.102616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.103082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.103096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.103600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.103614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.104141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.104156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.104637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.104650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.105107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.105121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.155 [2024-07-24 21:52:33.105598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.155 [2024-07-24 21:52:33.105611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.155 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.106071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.106085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 
00:27:25.156 [2024-07-24 21:52:33.106515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.106529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.107004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.107018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.107428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.107442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.107917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.107931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.108433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.108447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.108867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.108881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.109233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.109247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.109700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.109714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.110167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.110181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.110661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.110674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 
00:27:25.156 [2024-07-24 21:52:33.111162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.111176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.111655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.111669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.112153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.112167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.112646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.112660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.113144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.113158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.113614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.113628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.114084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.114098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.114579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.114592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.115018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.115031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.115498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.115512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 
00:27:25.156 [2024-07-24 21:52:33.115978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.115991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.116412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.116425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.116918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.116932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.117389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.117403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.117805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.117819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.118274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.156 [2024-07-24 21:52:33.118288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.156 qpair failed and we were unable to recover it. 00:27:25.156 [2024-07-24 21:52:33.118778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.118792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.119274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.119288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.119682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.119695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.120156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.120170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 
00:27:25.157 [2024-07-24 21:52:33.120584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.120597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.120971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.120985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.121489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.121505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.121986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.121999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.122426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.122440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.122840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.122854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.123331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.123345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.123829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.123843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.124348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.124363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.124880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.124894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 
00:27:25.157 [2024-07-24 21:52:33.125325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.125339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.125820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.125834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.126325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.126339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.126820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.126833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.127333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.127347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.127763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.127777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.128235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.128249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.128668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.128682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.129196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.129210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.129721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.129735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 
00:27:25.157 [2024-07-24 21:52:33.130230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.130244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.130720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.130734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.131203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.131217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.131733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.131746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.132157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.132172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.132669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.132683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.133112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.133126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.133608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.133622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.157 [2024-07-24 21:52:33.134051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.157 [2024-07-24 21:52:33.134065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.157 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.134553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.134567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 
00:27:25.158 [2024-07-24 21:52:33.135051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.135065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.135550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.135564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.136061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.136075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.136497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.136511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.136994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.137008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.137429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.137443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.137870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.137883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.138357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.138371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.138861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.138874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.139369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.139383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 
00:27:25.158 [2024-07-24 21:52:33.139788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.139802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.140258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.140272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.140796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.140812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.141293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.141307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.141759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.141772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.142252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.142266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.142757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.142771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.143251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.143265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.143672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.143686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.144166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.144185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 
00:27:25.158 [2024-07-24 21:52:33.144592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.144606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.145050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.145063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.145571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.145585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.146018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.146032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.146501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.146516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.146995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.147010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.147512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.147527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.147963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.147977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.148485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.148500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.149031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.149048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 
00:27:25.158 [2024-07-24 21:52:33.149534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.149548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.149975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.149989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.150407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.158 [2024-07-24 21:52:33.150421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.158 qpair failed and we were unable to recover it. 00:27:25.158 [2024-07-24 21:52:33.150925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.150938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.151439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.151453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.151904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.151917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.152418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.152432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.152866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.152880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.153301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.153315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.153792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.153806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 
00:27:25.159 [2024-07-24 21:52:33.154293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.154307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.154743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.154756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.155169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.155183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.155660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.155673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.156157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.156171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.156580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.156594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.157114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.157128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.157631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.157645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.158080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.158094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.158501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.158515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 
00:27:25.159 [2024-07-24 21:52:33.158960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.158973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.159458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.159472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.159881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.159897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.160327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.160341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.160748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.160762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.161255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.161269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.161754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.161768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.162253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.162266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.162668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.162682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.163159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.163173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 
00:27:25.159 [2024-07-24 21:52:33.163536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.163549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.163953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.163967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.164422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.164436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.164931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.164945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.165342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.165355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.165769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.165782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.159 [2024-07-24 21:52:33.166264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.159 [2024-07-24 21:52:33.166278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.159 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.166684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.166697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.167122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.167136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.167558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.167571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 
00:27:25.160 [2024-07-24 21:52:33.168023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.168037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.168552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.168566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.169084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.169098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.169497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.169511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.169972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.169985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.170457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.170471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.170992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.171006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.171536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.171550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.171985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.171998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.172482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.172496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 
00:27:25.160 [2024-07-24 21:52:33.172914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.172927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.173332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.173346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.173801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.173814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.174341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.174355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.174760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.174773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.175226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.175240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.175718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.175732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.176139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.176154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.176553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.176566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.177050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.177064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 
00:27:25.160 [2024-07-24 21:52:33.177557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.177571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.178000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.178013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.178421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.178438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.178841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.178855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.179264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.179278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.179731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.179745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.180225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.180239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.180721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.180735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.181123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.181137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.160 [2024-07-24 21:52:33.181534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.181548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 
00:27:25.160 [2024-07-24 21:52:33.182004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.160 [2024-07-24 21:52:33.182018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.160 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.182527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.182542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.182984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.182997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.183356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.183371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.183847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.183860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.184367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.184381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.184924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.184937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.185415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.185429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.185913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.185927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.186430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.186444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 
00:27:25.161 [2024-07-24 21:52:33.186962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.186976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.187451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.187465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.187969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.187983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.188507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.188521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.189002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.189015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.189499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.189513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.189918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.189932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.190329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.190343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.190758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.190772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.191258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.191273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 
00:27:25.161 [2024-07-24 21:52:33.191703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.191717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.192196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.192210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.192668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.192681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.193160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.193174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.193664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.193677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.194087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.194101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.194537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.194551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.161 qpair failed and we were unable to recover it. 00:27:25.161 [2024-07-24 21:52:33.194966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.161 [2024-07-24 21:52:33.194980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.195478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.195492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.195973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.195987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 
00:27:25.162 [2024-07-24 21:52:33.196341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.196355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.196785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.196799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.197276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.197292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.197775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.197789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.198194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.198208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.198653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.198667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.199093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.199106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.199563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.199576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.200054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.200067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.200552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.200566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 
00:27:25.162 [2024-07-24 21:52:33.200954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.200968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.201373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.201386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.201842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.201856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.202368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.202382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.202879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.202892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.203384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.203398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.203878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.203892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.204375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.204389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.204868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.204882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.205279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.205293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 
00:27:25.162 [2024-07-24 21:52:33.205711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.205725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.206122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.206136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.206591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.206605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.207053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.207066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.207545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.207559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.208061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.208075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.208551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.208565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.209022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.209036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.209516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.209530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.210014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.210028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 
00:27:25.162 [2024-07-24 21:52:33.210511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.162 [2024-07-24 21:52:33.210526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.162 qpair failed and we were unable to recover it. 00:27:25.162 [2024-07-24 21:52:33.210954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.210968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.211394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.211407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.211811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.211825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.212253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.212268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.212712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.212726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.213209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.213223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.213718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.213732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.214223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.214237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.214689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.214702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 
00:27:25.163 [2024-07-24 21:52:33.215156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.215170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.215647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.215661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.216147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.216161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.216599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.216613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.217032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.217055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.217479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.217493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.217948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.217962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.218442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.218456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.218881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.218895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.219398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.219412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 
00:27:25.163 [2024-07-24 21:52:33.219937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.219951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.220433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.220447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.220873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.220887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.221290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.221304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.221709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.221722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.222199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.222213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.222699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.222713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.223188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.223202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.223648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.223661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.224200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.224214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 
00:27:25.163 [2024-07-24 21:52:33.224622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.224635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.225108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.225122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.225527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.225541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.226015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.226028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.226456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.226470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.163 [2024-07-24 21:52:33.226924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.163 [2024-07-24 21:52:33.226937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.163 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.227456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.227470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.227964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.227977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.228480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.228494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.229006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.229022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 
00:27:25.164 [2024-07-24 21:52:33.229496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.229511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.229915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.229929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.230325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.230339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.230798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.230811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.231327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.231341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.231784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.231798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.232288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.232302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.232782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.232796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.233253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.233267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.233790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.233804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 
00:27:25.164 [2024-07-24 21:52:33.234235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.234249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.234726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.234740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.235240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.235254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.235773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.235787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.236183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.236197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.236655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.236669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.237193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.237207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.237637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.237651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.237998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.238011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.238486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.238500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 
00:27:25.164 [2024-07-24 21:52:33.239002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.239015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.239498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.239512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.240011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.240025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.240554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.240568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.241067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.241081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.241511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.241524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.241953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.241967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.242364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.242378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.164 qpair failed and we were unable to recover it. 00:27:25.164 [2024-07-24 21:52:33.242865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.164 [2024-07-24 21:52:33.242879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 00:27:25.165 [2024-07-24 21:52:33.243384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.243398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 
00:27:25.165 [2024-07-24 21:52:33.243890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.243904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 00:27:25.165 [2024-07-24 21:52:33.244337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.244351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 00:27:25.165 [2024-07-24 21:52:33.244701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.244716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 00:27:25.165 [2024-07-24 21:52:33.245156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.245170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 00:27:25.165 [2024-07-24 21:52:33.245541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.245554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 00:27:25.165 [2024-07-24 21:52:33.246039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.246057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 00:27:25.165 [2024-07-24 21:52:33.246520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.246535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 00:27:25.165 [2024-07-24 21:52:33.247059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.247074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 00:27:25.165 [2024-07-24 21:52:33.247556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.247570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 00:27:25.165 [2024-07-24 21:52:33.248039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.248063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 
00:27:25.165 [2024-07-24 21:52:33.248456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.248469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 00:27:25.165 [2024-07-24 21:52:33.248922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.165 [2024-07-24 21:52:33.248936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.165 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.249371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.249389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.249745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.249760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.250157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.250172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.250593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.250607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.251095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.251110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.251588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.251602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.252121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.252135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.252579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.252593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 
00:27:25.431 [2024-07-24 21:52:33.253100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.253114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.253594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.253608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.254036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.254053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.254547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.254561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.255041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.255058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.255469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.255482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.255917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.255931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.256335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.256349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.256828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.256842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.257258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.257273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 
00:27:25.431 [2024-07-24 21:52:33.257750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.431 [2024-07-24 21:52:33.257764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.431 qpair failed and we were unable to recover it. 00:27:25.431 [2024-07-24 21:52:33.258264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.258279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.258685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.258699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.259051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.259066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.259545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.259558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.260050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.260064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.260512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.260527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.261007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.261021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.261443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.261458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.261864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.261877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 
00:27:25.432 [2024-07-24 21:52:33.262274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.262288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.262745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.262759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.263262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.263277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.263705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.263718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.264194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.264208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.264593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.264607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.265074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.265089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.265523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.265537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.265959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.265973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.266391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.266408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 
00:27:25.432 [2024-07-24 21:52:33.266761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.266775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.267176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.267191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.267675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.267689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.268155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.268170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.268693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.268707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.269192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.269206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.269705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.269719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.270218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.270233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.270746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.270760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.271216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.271230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 
00:27:25.432 [2024-07-24 21:52:33.271670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.271683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.272135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.272150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.272556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.272570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.272979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.272993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.273402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.273417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.273824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.273838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.274260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.274274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.274768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.274782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.275186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.275201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.275565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.275579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 
00:27:25.432 [2024-07-24 21:52:33.276056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.276071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.276476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.276490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.276892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.276906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.277241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.277256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.277657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.277672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.278086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.278101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.278513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.278528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.278930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.278943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.279183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.279197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.279648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.279662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 
00:27:25.432 [2024-07-24 21:52:33.280131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.280145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.280493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.280507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.280982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.280995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.281396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.281410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.281812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.281825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.282227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.282241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.282649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.282663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.283003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.283017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.283433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.283447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.283862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.283878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 
00:27:25.432 [2024-07-24 21:52:33.284247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.284262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.284699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.284713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.285102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.285116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.285544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.285559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.286033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.286051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.286530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.286545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.286945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.286959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.287416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.432 [2024-07-24 21:52:33.287430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.432 qpair failed and we were unable to recover it. 00:27:25.432 [2024-07-24 21:52:33.287908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.287922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.288278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.288292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 
00:27:25.433 [2024-07-24 21:52:33.288747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.288761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.289158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.289172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.289647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.289661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.289905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.289919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.290343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.290358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.290694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.290707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.291102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.291116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.291593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.291608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.292084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.292098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.292508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.292522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 
00:27:25.433 [2024-07-24 21:52:33.292869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.292883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.293382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.293396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.293879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.293893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.294312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.294326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.294775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.294789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.295198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.295212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.295644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.295658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.296051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.296065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.296469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.296484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.296947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.296961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 
00:27:25.433 [2024-07-24 21:52:33.297416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.297430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.297884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.297898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.298315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.298329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.298781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.298795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.299225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.299240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.299693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.299706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.300162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.300176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.300650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.300664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.301140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.301154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.301583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.301599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 
00:27:25.433 [2024-07-24 21:52:33.302091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.302105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.302450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.302464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.302883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.302896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.303349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.303363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.303817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.303831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.304244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.304258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.304662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.304675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.305128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.305142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.305565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.305579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.306006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.306020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 
00:27:25.433 [2024-07-24 21:52:33.306502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.306516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.306919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.306932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.307342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.307356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.307761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.307776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.308179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.308194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.308597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.308610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.309083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.309098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.309499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.309513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.309804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.309818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.310169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.310184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 
00:27:25.433 [2024-07-24 21:52:33.310613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.310627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.310977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.310990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.311213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.311228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.311629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.311643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.312111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.312126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.312579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.312593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.313049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.313064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.313419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.313432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.313907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.313920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.314265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.314280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 
00:27:25.433 [2024-07-24 21:52:33.314736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.314750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.315110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.315124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.315335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.315348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.315737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.315750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.315985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.315999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.316333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.316347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.433 [2024-07-24 21:52:33.316686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.433 [2024-07-24 21:52:33.316700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.433 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.317040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.317136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.317601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.317615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.318004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.318020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 
00:27:25.434 [2024-07-24 21:52:33.318479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.318493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.318970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.318985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.319439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.319453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.319885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.319900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.320364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.320379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.320816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.320829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.321284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.321298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.321647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.321661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.322117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.322131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.322604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.322617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 
00:27:25.434 [2024-07-24 21:52:33.323095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.323110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.323456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.323470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.323681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.323695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.324173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.324187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.324608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.324621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.325101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.325115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.325593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.325606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.326080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.326094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.326478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.326492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.326988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.327001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 
00:27:25.434 [2024-07-24 21:52:33.327405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.327419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.327843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.327856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.328264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.328278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.328679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.328693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.329168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.329182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.329663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.329676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.330198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.330212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.330572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.330585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.331064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.331079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.331487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.331501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 
00:27:25.434 [2024-07-24 21:52:33.331903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.331917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.332317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.332331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.332810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.332823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.333301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.333315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.333789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.333803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.334178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.334193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.334532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.334545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.334953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.334967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.335438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.335452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.335905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.335922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 
00:27:25.434 [2024-07-24 21:52:33.336330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.336344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.336744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.336758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.337144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.337158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.337506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.337520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.337923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.337937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.338335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.338349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.338828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.338842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.339244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.339259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.339653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.339667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.340147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.340162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 
00:27:25.434 [2024-07-24 21:52:33.340615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.340629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.341136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.341150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.341364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.341378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.341877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.341891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.342371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.342385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.342782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.434 [2024-07-24 21:52:33.342796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.434 qpair failed and we were unable to recover it. 00:27:25.434 [2024-07-24 21:52:33.343150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.343165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.343525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.343539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.344016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.344031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.344454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.344469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 
00:27:25.435 [2024-07-24 21:52:33.344923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.344937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.345343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.345357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.345756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.345770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.346124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.346138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.346593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.346608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.347000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.347014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.347471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.347505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.347995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.348011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.348424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.348440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.348899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.348914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 
00:27:25.435 [2024-07-24 21:52:33.349392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.349407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.349816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.349831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.350303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.350319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.350774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.350788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.351270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.351285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.351788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.351801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.352207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.352223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.352681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.352696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.353152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.353166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.353641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.353655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 
00:27:25.435 [2024-07-24 21:52:33.354081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.354096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.354440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.354454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.354883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.354899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.355249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.355264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.355667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.355682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.356153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.356169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.356624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.356639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.357049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.357064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.357413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.357427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.357881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.357895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 
00:27:25.435 [2024-07-24 21:52:33.358347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.358362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.358766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.358780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.359201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.359216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.359694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.359711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.360117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.360131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.360585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.360599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.361053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.361067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.361541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.361556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.361960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.361974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.362326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.362340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 
00:27:25.435 [2024-07-24 21:52:33.362832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.362846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.363256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.363271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.363751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.363766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.364242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.364257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.364652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.364666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.365140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.365154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.365605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.365619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.366061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.366076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.366478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.366492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.366897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.366911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 
00:27:25.435 [2024-07-24 21:52:33.367390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.367404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.367788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.367802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.368255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.368270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.368675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.368689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.369084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.369098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.369570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.369584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.370011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.370026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.370418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.370433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.370907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.370921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.371376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.371390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 
00:27:25.435 [2024-07-24 21:52:33.371847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.371863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.372265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.372279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.372756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.372770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.373249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.373264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.373747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.435 [2024-07-24 21:52:33.373762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.435 qpair failed and we were unable to recover it. 00:27:25.435 [2024-07-24 21:52:33.374246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.374260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.374713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.374728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.375240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.375255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.375751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.375765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.376171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.376186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 
00:27:25.436 [2024-07-24 21:52:33.376582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.376596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.377073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.377088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.377526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.377540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.377968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.377982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.378436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.378455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.378880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.378895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.379355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.379370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.379846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.379861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.380363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.380377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.380849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.380864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 
00:27:25.436 [2024-07-24 21:52:33.381344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.381360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.381768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.381783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.382251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.382266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.382722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.382736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.383168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.383183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.383614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.383627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.383961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.383976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.384380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.384398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.384870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.384885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.385286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.385302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 
00:27:25.436 [2024-07-24 21:52:33.385782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.385796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.386271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.386285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.386717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.386732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.387121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.387136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.387522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.387536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.387892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.387906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.388120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.388134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.388489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.388503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.388982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.388996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.389423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.389438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 
00:27:25.436 [2024-07-24 21:52:33.389927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.389942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.390348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.390364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.390821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.390836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.391229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.391244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.391667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.391681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.392104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.392118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.392462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.392476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.392881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.392895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.393298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.393313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.393794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.393808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 
00:27:25.436 [2024-07-24 21:52:33.394288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.394303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.394708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.394723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.395108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.395124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.395601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.395616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.396051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.396068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.396527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.396541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.396934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.396948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.397425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.397439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.397868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.397883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.398290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.398305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 
00:27:25.436 [2024-07-24 21:52:33.398713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.398727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.399069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.399084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.399564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.399578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.400054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.400068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.400465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.400481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.400935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.400949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.401405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.401419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.436 qpair failed and we were unable to recover it. 00:27:25.436 [2024-07-24 21:52:33.401872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.436 [2024-07-24 21:52:33.401887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.402354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.402369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.402766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.402780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 
00:27:25.437 [2024-07-24 21:52:33.403120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.403135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.403592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.403606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.404066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.404081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.404537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.404552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.404969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.404983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.405457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.405472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.405891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.405906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.406358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.406372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.406773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.406787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.407263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.407277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 
00:27:25.437 [2024-07-24 21:52:33.407914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.407929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.408338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.408358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.408791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.408805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.409220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.409234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.409687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.409702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.410158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.410173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.410581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.410594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.410991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.411005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.411479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.411492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.411960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.411974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 
00:27:25.437 [2024-07-24 21:52:33.412442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.412456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.412915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.412928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.413430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.413444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.413902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.413916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.414342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.414356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.414714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.414728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.415200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.415214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.415642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.415655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.416307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.416321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.416824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.416837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 
00:27:25.437 [2024-07-24 21:52:33.417292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.417306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.417715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.417728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.418158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.418172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.418654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.418668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.419150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.419165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.419450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.419464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.419965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.419978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.420460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.420474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.420904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.420918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.421324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.421339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 
00:27:25.437 [2024-07-24 21:52:33.421746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.421760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.422256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.422270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.422753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.422766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.423266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.423280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.423700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.423713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.424192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.424208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.424634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.424647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.425053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.425067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.425466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.425480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.425840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.425853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 
00:27:25.437 [2024-07-24 21:52:33.426315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.426330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.426680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.426694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.427152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.427167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.427716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.427730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.428221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.428235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.428717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.428731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.429224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.429238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.429712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.429726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.430128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.430144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.430542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.430557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 
00:27:25.437 [2024-07-24 21:52:33.431017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.431031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.431512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.431527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.431993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.432007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.432418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.432432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.432828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.432842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.433320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.433334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.437 [2024-07-24 21:52:33.433726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.437 [2024-07-24 21:52:33.433739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.437 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.434210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.434225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.434680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.434694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.435148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.435163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 
00:27:25.438 [2024-07-24 21:52:33.435499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.435513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.435973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.435987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:25.438 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:25.438 [2024-07-24 21:52:33.436483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.436498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:25.438 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:25.438 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.438 [2024-07-24 21:52:33.436960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.436974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.437427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.437442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.437831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.437844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.438255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.438270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.438621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.438638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 
00:27:25.438 [2024-07-24 21:52:33.439092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.439106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.439514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.439528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.439929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.439943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.440398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.440415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.440817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.440830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.441297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.441313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.441715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.441729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.442207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.442222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.442627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.442642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.443066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.443080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 
00:27:25.438 [2024-07-24 21:52:33.443475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.443489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.443898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.443913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.444339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.444354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.444706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.444720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.445130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.445144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.445820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.445834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.446251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.446266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.446720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.446733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.447212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.447227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.447657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.447672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 
00:27:25.438 [2024-07-24 21:52:33.448094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.448108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.448530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.448546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.448975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.448989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.449400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.449415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.449840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.449854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.450261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.450277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.450753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.450772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.451227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.451241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.451647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.451661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.452153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.452168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 
00:27:25.438 [2024-07-24 21:52:33.452565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.452579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.452935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.452949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.453412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.453427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.453880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.453894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.454324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.454338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.454748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.454763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.455372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.455390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.455799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.455814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.456250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.456264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.456614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.456629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa73f30 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 
00:27:25.438 [2024-07-24 21:52:33.457335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.457366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.457735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.457752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.458185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.458201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.458595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.458610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.459035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.459053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.459520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.459534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.460032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.460051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.460395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.460409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.461039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.461057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.461381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.461396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 
00:27:25.438 [2024-07-24 21:52:33.461814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.461829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.462521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.462538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.463009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.463023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.463430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.463448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.463806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.463821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.464174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.438 [2024-07-24 21:52:33.464189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.438 qpair failed and we were unable to recover it. 00:27:25.438 [2024-07-24 21:52:33.464556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.464570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.465058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.465072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.465478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.465492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.466055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.466071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 
00:27:25.439 [2024-07-24 21:52:33.466432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.466447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.466904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.466918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.467324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.467339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.468031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.468050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.468419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.468434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.468905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.468919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.469368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.469383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.469794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.469808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.470304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.470318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.470729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.470743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 
00:27:25.439 [2024-07-24 21:52:33.471221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.471236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.471587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.471601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.472057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.472071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.472426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.472440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.472897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.472912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.439 [2024-07-24 21:52:33.473343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.473359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:25.439 [2024-07-24 21:52:33.473722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.473739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.439 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.439 [2024-07-24 21:52:33.474311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.474327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 
00:27:25.439 [2024-07-24 21:52:33.474681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.474695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.475111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.475126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.475479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.475493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.475850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.475864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.476307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.476321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.476674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.476688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.477046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.477060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.477466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.477480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.477844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.477858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.478234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.478249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 
00:27:25.439 [2024-07-24 21:52:33.478619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.478634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.479058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.479072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.479438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.479452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.479798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.479814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.480303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.480318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.480725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.480739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.481166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.481181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.481533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.481547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.482019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.482033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.482379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.482393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 
00:27:25.439 [2024-07-24 21:52:33.482829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.482844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.483276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.483291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.483700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.483715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.484132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.484147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.484512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.484526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.484901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.484914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.485341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.485356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.485727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.485743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.486245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.486262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.486697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.486712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 
00:27:25.439 [2024-07-24 21:52:33.487194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.487210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.487557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.487573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.487937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.487954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.488362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.488380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.488835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.488851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.489337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.489356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.489810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.489828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.490274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.490293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.490649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.439 [2024-07-24 21:52:33.490665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.439 qpair failed and we were unable to recover it. 00:27:25.439 [2024-07-24 21:52:33.491148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.491163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 
00:27:25.440 [2024-07-24 21:52:33.491522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.491537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.492064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.492079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.492458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.492472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.492900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.492914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 Malloc0 00:27:25.440 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.440 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:25.440 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.440 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.440 [2024-07-24 21:52:33.494097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.494126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.494511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.494527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.494939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.494954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.495316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.495331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 
00:27:25.440 [2024-07-24 21:52:33.495742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.495756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.496106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.496120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.496477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.496491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.496820] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.440 [2024-07-24 21:52:33.496905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.496922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.497379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.497393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.497668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.497682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.498024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.498038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.498397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.498411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.498809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.498823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 
00:27:25.440 [2024-07-24 21:52:33.499185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.499199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.499556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.499570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.500001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.500015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.500358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.500372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.500542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.500556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.500906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.500920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.501280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.501294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.501702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.501715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.502197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.502212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.502658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.502672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 
00:27:25.440 [2024-07-24 21:52:33.503081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.503095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.503495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.503509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.503863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.503877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.504279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.504299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.504646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.504660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.504832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.504846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.505257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.505272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.440 [2024-07-24 21:52:33.505674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.505688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:25.440 [2024-07-24 21:52:33.506166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.506180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 
00:27:25.440 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.440 [2024-07-24 21:52:33.506528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.440 [2024-07-24 21:52:33.506543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.506946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.506960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.507382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.507396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.507736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.507750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.508207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.508222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.508699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.508712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.509066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.509080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.509482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.509497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.509662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.509676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 
00:27:25.440 [2024-07-24 21:52:33.510086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.510100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.510500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.510514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.510918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.510932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.511334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.511348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.511708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.511725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.512182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.512196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.512410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.512423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.512847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.512861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.513263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.513277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.513687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.513701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 
00:27:25.440 [2024-07-24 21:52:33.514053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.514067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.514410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.514423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.514821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.514834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.515171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.515185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.515595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.515609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.515952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.515966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.516401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.516414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.440 qpair failed and we were unable to recover it. 00:27:25.440 [2024-07-24 21:52:33.516819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.440 [2024-07-24 21:52:33.516832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.517252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.517266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 
00:27:25.441 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.441 [2024-07-24 21:52:33.517692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.517706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:25.441 [2024-07-24 21:52:33.517948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.517962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.441 [2024-07-24 21:52:33.518329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.518343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.441 [2024-07-24 21:52:33.518673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.518687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.519088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.519102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.519535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.519549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.519887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.519900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.520356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.520370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 
00:27:25.441 [2024-07-24 21:52:33.520713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.520727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.521197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.521211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.521619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.521635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.522037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.522064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.522424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.522437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.522842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.522857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.523119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.523132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.523542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.523555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.523946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.523959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.524365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.524379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 
00:27:25.441 [2024-07-24 21:52:33.524799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.524813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.525298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.525312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.441 [2024-07-24 21:52:33.525719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.525734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:25.441 [2024-07-24 21:52:33.526217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.526232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.441 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.441 [2024-07-24 21:52:33.526692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.526706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.527106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.527121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.527478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.527492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.527950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.527964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 
00:27:25.441 [2024-07-24 21:52:33.528396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.528410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.528779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.528792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 [2024-07-24 21:52:33.529063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.441 [2024-07-24 21:52:33.529143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.441 [2024-07-24 21:52:33.529157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4e50000b90 with addr=10.0.0.2, port=4420 00:27:25.441 qpair failed and we were unable to recover it. 00:27:25.441 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.441 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:25.441 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.441 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.441 [2024-07-24 21:52:33.537513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.441 [2024-07-24 21:52:33.537713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.441 [2024-07-24 21:52:33.537740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.441 [2024-07-24 21:52:33.537751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.441 [2024-07-24 21:52:33.537760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.441 [2024-07-24 21:52:33.537788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.441 qpair failed and we were unable to recover it. 
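For reference, the target-side setup that host/target_disconnect.sh drives through rpc_cmd in the trace above can be reproduced by hand with SPDK's scripts/rpc.py. This is a minimal sketch, assuming a running nvmf_tgt application and rpc.py on PATH; the commands and arguments are copied from the traced rpc_cmd calls, not invented here. The repeated errno = 111 entries are ECONNREFUSED, presumably from the host side retrying its connection before the listener at 10.0.0.2:4420 is available, which is the situation this target-disconnect test exercises.

# Minimal sketch (assumes a running nvmf_tgt and SPDK's scripts/rpc.py on PATH);
# commands and flags mirror the rpc_cmd calls traced above.
rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev with 512-byte blocks
rpc.py nvmf_create_transport -t tcp -o             # initialize the TCP transport (flags as in the log)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420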
00:27:25.703 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.703 21:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3214803 00:27:25.703 [2024-07-24 21:52:33.547403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.547550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.547570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.547577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.547583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.547600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 00:27:25.703 [2024-07-24 21:52:33.557407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.557543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.557562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.557570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.557575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.557593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 00:27:25.703 [2024-07-24 21:52:33.567365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.567510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.567529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.567536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.567542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.567559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 
00:27:25.703 [2024-07-24 21:52:33.577424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.577563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.577581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.577589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.577595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.577611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 00:27:25.703 [2024-07-24 21:52:33.587429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.587561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.587579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.587592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.587598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.587615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 00:27:25.703 [2024-07-24 21:52:33.597514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.597653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.597672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.597679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.597686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.597703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 
00:27:25.703 [2024-07-24 21:52:33.607554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.607692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.607710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.607718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.607724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.607741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 00:27:25.703 [2024-07-24 21:52:33.617543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.617710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.617727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.617734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.617740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.617757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 00:27:25.703 [2024-07-24 21:52:33.627546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.627680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.627698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.627706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.627712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.627729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 
00:27:25.703 [2024-07-24 21:52:33.637614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.637755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.637773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.637780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.637786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.637802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 00:27:25.703 [2024-07-24 21:52:33.647640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.647776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.647794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.647802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.647807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.647824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 00:27:25.703 [2024-07-24 21:52:33.657677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.657811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.657829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.657836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.657843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.657860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 
00:27:25.703 [2024-07-24 21:52:33.667681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.667812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.667830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.667837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.667843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.667860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 00:27:25.703 [2024-07-24 21:52:33.677797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.677936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.677957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.677964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.677970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.677987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 00:27:25.703 [2024-07-24 21:52:33.687680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.687814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.687831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.687838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.687844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.687861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 
00:27:25.703 [2024-07-24 21:52:33.697814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.697961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.697978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.697985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.703 [2024-07-24 21:52:33.697991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.703 [2024-07-24 21:52:33.698008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.703 qpair failed and we were unable to recover it. 00:27:25.703 [2024-07-24 21:52:33.707752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.703 [2024-07-24 21:52:33.707890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.703 [2024-07-24 21:52:33.707908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.703 [2024-07-24 21:52:33.707915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.704 [2024-07-24 21:52:33.707920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.704 [2024-07-24 21:52:33.707937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.704 qpair failed and we were unable to recover it. 00:27:25.704 [2024-07-24 21:52:33.717765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.704 [2024-07-24 21:52:33.717898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.704 [2024-07-24 21:52:33.717916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.704 [2024-07-24 21:52:33.717923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.704 [2024-07-24 21:52:33.717929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.704 [2024-07-24 21:52:33.717956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.704 qpair failed and we were unable to recover it. 
00:27:25.704 [2024-07-24 21:52:33.727880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.704 [2024-07-24 21:52:33.728033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.704 [2024-07-24 21:52:33.728057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.704 [2024-07-24 21:52:33.728064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.704 [2024-07-24 21:52:33.728071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.704 [2024-07-24 21:52:33.728088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.704 qpair failed and we were unable to recover it. 00:27:25.704 [2024-07-24 21:52:33.737885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.704 [2024-07-24 21:52:33.738022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.704 [2024-07-24 21:52:33.738041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.704 [2024-07-24 21:52:33.738055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.704 [2024-07-24 21:52:33.738061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.704 [2024-07-24 21:52:33.738078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.704 qpair failed and we were unable to recover it. 00:27:25.704 [2024-07-24 21:52:33.747959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.704 [2024-07-24 21:52:33.748138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.704 [2024-07-24 21:52:33.748156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.704 [2024-07-24 21:52:33.748163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.704 [2024-07-24 21:52:33.748169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.704 [2024-07-24 21:52:33.748187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.704 qpair failed and we were unable to recover it. 
00:27:25.704 [2024-07-24 21:52:33.757959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.704 [2024-07-24 21:52:33.758103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.704 [2024-07-24 21:52:33.758121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.704 [2024-07-24 21:52:33.758128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.704 [2024-07-24 21:52:33.758134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.704 [2024-07-24 21:52:33.758152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.704 qpair failed and we were unable to recover it. 00:27:25.704 [2024-07-24 21:52:33.768240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.704 [2024-07-24 21:52:33.768390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.704 [2024-07-24 21:52:33.768411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.704 [2024-07-24 21:52:33.768418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.704 [2024-07-24 21:52:33.768424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.704 [2024-07-24 21:52:33.768441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.704 qpair failed and we were unable to recover it. 00:27:25.704 [2024-07-24 21:52:33.778067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.704 [2024-07-24 21:52:33.778205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.704 [2024-07-24 21:52:33.778223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.704 [2024-07-24 21:52:33.778230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.704 [2024-07-24 21:52:33.778236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.704 [2024-07-24 21:52:33.778252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.704 qpair failed and we were unable to recover it. 
00:27:25.704 [2024-07-24 21:52:33.788019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.704 [2024-07-24 21:52:33.788163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.704 [2024-07-24 21:52:33.788181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.704 [2024-07-24 21:52:33.788188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.704 [2024-07-24 21:52:33.788193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.704 [2024-07-24 21:52:33.788210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.704 qpair failed and we were unable to recover it. 00:27:25.704 [2024-07-24 21:52:33.798093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.704 [2024-07-24 21:52:33.798272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.704 [2024-07-24 21:52:33.798290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.704 [2024-07-24 21:52:33.798297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.704 [2024-07-24 21:52:33.798303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.704 [2024-07-24 21:52:33.798320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.704 qpair failed and we were unable to recover it. 00:27:25.704 [2024-07-24 21:52:33.808030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.704 [2024-07-24 21:52:33.808173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.704 [2024-07-24 21:52:33.808192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.704 [2024-07-24 21:52:33.808200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.704 [2024-07-24 21:52:33.808210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.704 [2024-07-24 21:52:33.808228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.704 qpair failed and we were unable to recover it. 
00:27:25.965 [2024-07-24 21:52:33.818090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.965 [2024-07-24 21:52:33.818231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.965 [2024-07-24 21:52:33.818249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.965 [2024-07-24 21:52:33.818256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.965 [2024-07-24 21:52:33.818263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.965 [2024-07-24 21:52:33.818280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.965 qpair failed and we were unable to recover it. 00:27:25.965 [2024-07-24 21:52:33.828137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.965 [2024-07-24 21:52:33.828321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.965 [2024-07-24 21:52:33.828339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.965 [2024-07-24 21:52:33.828346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.965 [2024-07-24 21:52:33.828351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.965 [2024-07-24 21:52:33.828368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.965 qpair failed and we were unable to recover it. 00:27:25.965 [2024-07-24 21:52:33.838186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.965 [2024-07-24 21:52:33.838322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.965 [2024-07-24 21:52:33.838340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.965 [2024-07-24 21:52:33.838347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.965 [2024-07-24 21:52:33.838353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.965 [2024-07-24 21:52:33.838370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.965 qpair failed and we were unable to recover it. 
00:27:25.965 [2024-07-24 21:52:33.848254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.965 [2024-07-24 21:52:33.848389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.965 [2024-07-24 21:52:33.848407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.965 [2024-07-24 21:52:33.848414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.965 [2024-07-24 21:52:33.848420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.965 [2024-07-24 21:52:33.848437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.965 qpair failed and we were unable to recover it. 00:27:25.965 [2024-07-24 21:52:33.858244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.965 [2024-07-24 21:52:33.858408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.858427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.858434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.858440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.966 [2024-07-24 21:52:33.858456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-24 21:52:33.868194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.868340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.868358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.868365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.868371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.966 [2024-07-24 21:52:33.868388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.966 qpair failed and we were unable to recover it. 
00:27:25.966 [2024-07-24 21:52:33.878289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.878422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.878440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.878447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.878453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.966 [2024-07-24 21:52:33.878469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-24 21:52:33.888336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.888473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.888491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.888498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.888503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.966 [2024-07-24 21:52:33.888519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-24 21:52:33.898360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.898501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.898519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.898526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.898535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.966 [2024-07-24 21:52:33.898552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.966 qpair failed and we were unable to recover it. 
00:27:25.966 [2024-07-24 21:52:33.908311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.908485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.908503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.908510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.908516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.966 [2024-07-24 21:52:33.908532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-24 21:52:33.918414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.918556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.918575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.918582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.918589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.966 [2024-07-24 21:52:33.918605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-24 21:52:33.928437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.928573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.928590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.928597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.928603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.966 [2024-07-24 21:52:33.928620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.966 qpair failed and we were unable to recover it. 
00:27:25.966 [2024-07-24 21:52:33.938462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.938598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.938616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.938623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.938629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:25.966 [2024-07-24 21:52:33.938646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-24 21:52:33.948523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.948706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.948737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.948749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.948758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.966 [2024-07-24 21:52:33.948783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-24 21:52:33.958528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.958670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.958689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.958696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.958702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.966 [2024-07-24 21:52:33.958719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.966 qpair failed and we were unable to recover it. 
00:27:25.966 [2024-07-24 21:52:33.968559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.968697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.968716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.968723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.968730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.966 [2024-07-24 21:52:33.968747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-24 21:52:33.978610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.978766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.978785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.978793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.966 [2024-07-24 21:52:33.978799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.966 [2024-07-24 21:52:33.978815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-24 21:52:33.988601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.966 [2024-07-24 21:52:33.988739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.966 [2024-07-24 21:52:33.988758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.966 [2024-07-24 21:52:33.988769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.967 [2024-07-24 21:52:33.988775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.967 [2024-07-24 21:52:33.988792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.967 qpair failed and we were unable to recover it. 
00:27:25.967 [2024-07-24 21:52:33.998815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.967 [2024-07-24 21:52:33.998950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.967 [2024-07-24 21:52:33.998969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.967 [2024-07-24 21:52:33.998976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.967 [2024-07-24 21:52:33.998982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.967 [2024-07-24 21:52:33.998998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-24 21:52:34.008659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.967 [2024-07-24 21:52:34.008799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.967 [2024-07-24 21:52:34.008818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.967 [2024-07-24 21:52:34.008825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.967 [2024-07-24 21:52:34.008831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.967 [2024-07-24 21:52:34.008847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-24 21:52:34.018691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.967 [2024-07-24 21:52:34.018843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.967 [2024-07-24 21:52:34.018861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.967 [2024-07-24 21:52:34.018868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.967 [2024-07-24 21:52:34.018874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.967 [2024-07-24 21:52:34.018890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.967 qpair failed and we were unable to recover it. 
00:27:25.967 [2024-07-24 21:52:34.028744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.967 [2024-07-24 21:52:34.028892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.967 [2024-07-24 21:52:34.028911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.967 [2024-07-24 21:52:34.028918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.967 [2024-07-24 21:52:34.028924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.967 [2024-07-24 21:52:34.028941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-24 21:52:34.038747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.967 [2024-07-24 21:52:34.038877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.967 [2024-07-24 21:52:34.038896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.967 [2024-07-24 21:52:34.038903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.967 [2024-07-24 21:52:34.038909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.967 [2024-07-24 21:52:34.038925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-24 21:52:34.048767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.967 [2024-07-24 21:52:34.048907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.967 [2024-07-24 21:52:34.048926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.967 [2024-07-24 21:52:34.048932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.967 [2024-07-24 21:52:34.048938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.967 [2024-07-24 21:52:34.048955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.967 qpair failed and we were unable to recover it. 
00:27:25.967 [2024-07-24 21:52:34.058801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.967 [2024-07-24 21:52:34.058942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.967 [2024-07-24 21:52:34.058960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.967 [2024-07-24 21:52:34.058967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.967 [2024-07-24 21:52:34.058973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.967 [2024-07-24 21:52:34.058990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-24 21:52:34.068884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.967 [2024-07-24 21:52:34.069040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.967 [2024-07-24 21:52:34.069063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.967 [2024-07-24 21:52:34.069071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.967 [2024-07-24 21:52:34.069077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.967 [2024-07-24 21:52:34.069093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-24 21:52:34.078879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.967 [2024-07-24 21:52:34.079011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.967 [2024-07-24 21:52:34.079030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.967 [2024-07-24 21:52:34.079041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.967 [2024-07-24 21:52:34.079053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:25.967 [2024-07-24 21:52:34.079070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:25.967 qpair failed and we were unable to recover it. 
00:27:26.228 [2024-07-24 21:52:34.089117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.228 [2024-07-24 21:52:34.089254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.228 [2024-07-24 21:52:34.089273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.228 [2024-07-24 21:52:34.089280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.228 [2024-07-24 21:52:34.089286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.228 [2024-07-24 21:52:34.089302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.228 qpair failed and we were unable to recover it. 00:27:26.228 [2024-07-24 21:52:34.098873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.228 [2024-07-24 21:52:34.099018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.228 [2024-07-24 21:52:34.099037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.228 [2024-07-24 21:52:34.099049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.228 [2024-07-24 21:52:34.099055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.228 [2024-07-24 21:52:34.099072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.228 qpair failed and we were unable to recover it. 00:27:26.228 [2024-07-24 21:52:34.108941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.228 [2024-07-24 21:52:34.109119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.228 [2024-07-24 21:52:34.109138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.228 [2024-07-24 21:52:34.109145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.228 [2024-07-24 21:52:34.109151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.228 [2024-07-24 21:52:34.109168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.228 qpair failed and we were unable to recover it. 
00:27:26.228 [2024-07-24 21:52:34.118991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.228 [2024-07-24 21:52:34.119132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.228 [2024-07-24 21:52:34.119150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.228 [2024-07-24 21:52:34.119157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.228 [2024-07-24 21:52:34.119163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.228 [2024-07-24 21:52:34.119180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.228 qpair failed and we were unable to recover it. 00:27:26.228 [2024-07-24 21:52:34.128946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.228 [2024-07-24 21:52:34.129090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.228 [2024-07-24 21:52:34.129109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.228 [2024-07-24 21:52:34.129116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.228 [2024-07-24 21:52:34.129123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.228 [2024-07-24 21:52:34.129139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.228 qpair failed and we were unable to recover it. 00:27:26.228 [2024-07-24 21:52:34.139055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.228 [2024-07-24 21:52:34.139195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.228 [2024-07-24 21:52:34.139214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.228 [2024-07-24 21:52:34.139221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.228 [2024-07-24 21:52:34.139227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.228 [2024-07-24 21:52:34.139243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.228 qpair failed and we were unable to recover it. 
00:27:26.228 [2024-07-24 21:52:34.149084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.228 [2024-07-24 21:52:34.149220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.228 [2024-07-24 21:52:34.149239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.228 [2024-07-24 21:52:34.149246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.228 [2024-07-24 21:52:34.149252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.228 [2024-07-24 21:52:34.149268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 00:27:26.229 [2024-07-24 21:52:34.159115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.159251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.159275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.159283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.159289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.159305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 00:27:26.229 [2024-07-24 21:52:34.169157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.169482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.169500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.169510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.169516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.169532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 
00:27:26.229 [2024-07-24 21:52:34.179179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.179312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.179331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.179338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.179343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.179359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 00:27:26.229 [2024-07-24 21:52:34.189262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.189428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.189446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.189453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.189459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.189475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 00:27:26.229 [2024-07-24 21:52:34.199162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.199301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.199320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.199326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.199332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.199348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 
00:27:26.229 [2024-07-24 21:52:34.209297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.209440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.209458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.209464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.209471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.209487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 00:27:26.229 [2024-07-24 21:52:34.219336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.219470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.219488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.219495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.219501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.219517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 00:27:26.229 [2024-07-24 21:52:34.229315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.229451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.229469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.229476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.229482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.229498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 
00:27:26.229 [2024-07-24 21:52:34.239359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.239495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.239513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.239520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.239526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.239543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 00:27:26.229 [2024-07-24 21:52:34.249408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.249544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.249563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.249570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.249576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.249593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 00:27:26.229 [2024-07-24 21:52:34.259407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.259555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.259576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.259583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.259589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.259605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 
00:27:26.229 [2024-07-24 21:52:34.269352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.269492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.269511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.269518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.269524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.269540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 00:27:26.229 [2024-07-24 21:52:34.279465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.229 [2024-07-24 21:52:34.279596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.229 [2024-07-24 21:52:34.279614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.229 [2024-07-24 21:52:34.279621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.229 [2024-07-24 21:52:34.279627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.229 [2024-07-24 21:52:34.279644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.229 qpair failed and we were unable to recover it. 00:27:26.229 [2024-07-24 21:52:34.289713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.230 [2024-07-24 21:52:34.289853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.230 [2024-07-24 21:52:34.289871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.230 [2024-07-24 21:52:34.289878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.230 [2024-07-24 21:52:34.289884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.230 [2024-07-24 21:52:34.289900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.230 qpair failed and we were unable to recover it. 
00:27:26.230 [2024-07-24 21:52:34.299513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.230 [2024-07-24 21:52:34.299649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.230 [2024-07-24 21:52:34.299668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.230 [2024-07-24 21:52:34.299675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.230 [2024-07-24 21:52:34.299681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.230 [2024-07-24 21:52:34.299701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.230 qpair failed and we were unable to recover it. 00:27:26.230 [2024-07-24 21:52:34.309543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.230 [2024-07-24 21:52:34.309678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.230 [2024-07-24 21:52:34.309696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.230 [2024-07-24 21:52:34.309703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.230 [2024-07-24 21:52:34.309709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.230 [2024-07-24 21:52:34.309726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.230 qpair failed and we were unable to recover it. 00:27:26.230 [2024-07-24 21:52:34.319510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.230 [2024-07-24 21:52:34.319643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.230 [2024-07-24 21:52:34.319662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.230 [2024-07-24 21:52:34.319669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.230 [2024-07-24 21:52:34.319675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.230 [2024-07-24 21:52:34.319691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.230 qpair failed and we were unable to recover it. 
00:27:26.230 [2024-07-24 21:52:34.329577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.230 [2024-07-24 21:52:34.329712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.230 [2024-07-24 21:52:34.329730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.230 [2024-07-24 21:52:34.329737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.230 [2024-07-24 21:52:34.329743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.230 [2024-07-24 21:52:34.329760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.230 qpair failed and we were unable to recover it. 00:27:26.230 [2024-07-24 21:52:34.339602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.230 [2024-07-24 21:52:34.339735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.230 [2024-07-24 21:52:34.339754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.230 [2024-07-24 21:52:34.339761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.230 [2024-07-24 21:52:34.339767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.230 [2024-07-24 21:52:34.339783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.230 qpair failed and we were unable to recover it. 00:27:26.490 [2024-07-24 21:52:34.349704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.490 [2024-07-24 21:52:34.349838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.490 [2024-07-24 21:52:34.349861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.490 [2024-07-24 21:52:34.349867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.490 [2024-07-24 21:52:34.349873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.490 [2024-07-24 21:52:34.349890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.490 qpair failed and we were unable to recover it. 
00:27:26.490 [2024-07-24 21:52:34.359695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.490 [2024-07-24 21:52:34.359827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.490 [2024-07-24 21:52:34.359845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.490 [2024-07-24 21:52:34.359852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.490 [2024-07-24 21:52:34.359858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.490 [2024-07-24 21:52:34.359875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.490 qpair failed and we were unable to recover it. 00:27:26.490 [2024-07-24 21:52:34.369736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.490 [2024-07-24 21:52:34.369876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.490 [2024-07-24 21:52:34.369894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.490 [2024-07-24 21:52:34.369901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.490 [2024-07-24 21:52:34.369906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.490 [2024-07-24 21:52:34.369922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.490 qpair failed and we were unable to recover it. 00:27:26.490 [2024-07-24 21:52:34.379751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.490 [2024-07-24 21:52:34.379905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.490 [2024-07-24 21:52:34.379923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.490 [2024-07-24 21:52:34.379930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.490 [2024-07-24 21:52:34.379936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.490 [2024-07-24 21:52:34.379952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.490 qpair failed and we were unable to recover it. 
00:27:26.490 [2024-07-24 21:52:34.389812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.490 [2024-07-24 21:52:34.389974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.490 [2024-07-24 21:52:34.389992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.490 [2024-07-24 21:52:34.389999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.490 [2024-07-24 21:52:34.390005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.490 [2024-07-24 21:52:34.390025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.490 qpair failed and we were unable to recover it. 00:27:26.490 [2024-07-24 21:52:34.399820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.490 [2024-07-24 21:52:34.399958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.490 [2024-07-24 21:52:34.399977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.490 [2024-07-24 21:52:34.399984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.490 [2024-07-24 21:52:34.399990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.490 [2024-07-24 21:52:34.400007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.490 qpair failed and we were unable to recover it. 00:27:26.490 [2024-07-24 21:52:34.409854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.490 [2024-07-24 21:52:34.409988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.490 [2024-07-24 21:52:34.410006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.490 [2024-07-24 21:52:34.410013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.490 [2024-07-24 21:52:34.410019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.490 [2024-07-24 21:52:34.410035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.490 qpair failed and we were unable to recover it. 
00:27:26.490 [2024-07-24 21:52:34.419883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.490 [2024-07-24 21:52:34.420022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.490 [2024-07-24 21:52:34.420040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.490 [2024-07-24 21:52:34.420053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.490 [2024-07-24 21:52:34.420059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.490 [2024-07-24 21:52:34.420076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.490 qpair failed and we were unable to recover it. 00:27:26.490 [2024-07-24 21:52:34.429994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.490 [2024-07-24 21:52:34.430135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.490 [2024-07-24 21:52:34.430154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.490 [2024-07-24 21:52:34.430161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.490 [2024-07-24 21:52:34.430168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.490 [2024-07-24 21:52:34.430184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.490 qpair failed and we were unable to recover it. 00:27:26.490 [2024-07-24 21:52:34.439968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.490 [2024-07-24 21:52:34.440108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.490 [2024-07-24 21:52:34.440130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.440137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.440143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.440161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 
00:27:26.491 [2024-07-24 21:52:34.449978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.450123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.450142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.450149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.450155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.450171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 00:27:26.491 [2024-07-24 21:52:34.460009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.460148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.460166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.460174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.460180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.460197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 00:27:26.491 [2024-07-24 21:52:34.469950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.470106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.470126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.470134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.470140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.470157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 
00:27:26.491 [2024-07-24 21:52:34.480067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.480209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.480228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.480234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.480241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.480261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 00:27:26.491 [2024-07-24 21:52:34.490098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.490235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.490253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.490260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.490267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.490283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 00:27:26.491 [2024-07-24 21:52:34.500126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.500263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.500282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.500289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.500295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.500311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 
00:27:26.491 [2024-07-24 21:52:34.510156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.510294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.510313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.510319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.510325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.510341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 00:27:26.491 [2024-07-24 21:52:34.520174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.520311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.520330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.520337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.520343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.520360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 00:27:26.491 [2024-07-24 21:52:34.530137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.530276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.530298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.530305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.530311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.530328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 
00:27:26.491 [2024-07-24 21:52:34.540236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.540369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.540388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.540395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.540402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.540418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 00:27:26.491 [2024-07-24 21:52:34.550272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.550408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.550427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.550434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.550440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.550457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 00:27:26.491 [2024-07-24 21:52:34.560341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.560490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.560508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.560516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.491 [2024-07-24 21:52:34.560522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.491 [2024-07-24 21:52:34.560538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.491 qpair failed and we were unable to recover it. 
00:27:26.491 [2024-07-24 21:52:34.570255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.491 [2024-07-24 21:52:34.570426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.491 [2024-07-24 21:52:34.570444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.491 [2024-07-24 21:52:34.570451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.492 [2024-07-24 21:52:34.570464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.492 [2024-07-24 21:52:34.570480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.492 qpair failed and we were unable to recover it. 00:27:26.492 [2024-07-24 21:52:34.580373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.492 [2024-07-24 21:52:34.580521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.492 [2024-07-24 21:52:34.580540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.492 [2024-07-24 21:52:34.580547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.492 [2024-07-24 21:52:34.580554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.492 [2024-07-24 21:52:34.580571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.492 qpair failed and we were unable to recover it. 00:27:26.492 [2024-07-24 21:52:34.590304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.492 [2024-07-24 21:52:34.590438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.492 [2024-07-24 21:52:34.590457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.492 [2024-07-24 21:52:34.590464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.492 [2024-07-24 21:52:34.590470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.492 [2024-07-24 21:52:34.590487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.492 qpair failed and we were unable to recover it. 
00:27:26.492 [2024-07-24 21:52:34.600473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.492 [2024-07-24 21:52:34.600654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.492 [2024-07-24 21:52:34.600672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.492 [2024-07-24 21:52:34.600679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.492 [2024-07-24 21:52:34.600685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.492 [2024-07-24 21:52:34.600701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.492 qpair failed and we were unable to recover it. 00:27:26.753 [2024-07-24 21:52:34.610376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.753 [2024-07-24 21:52:34.610515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.753 [2024-07-24 21:52:34.610534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.753 [2024-07-24 21:52:34.610541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.753 [2024-07-24 21:52:34.610547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.753 [2024-07-24 21:52:34.610563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.753 qpair failed and we were unable to recover it. 00:27:26.753 [2024-07-24 21:52:34.620467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.753 [2024-07-24 21:52:34.620632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.753 [2024-07-24 21:52:34.620650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.753 [2024-07-24 21:52:34.620659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.753 [2024-07-24 21:52:34.620667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.753 [2024-07-24 21:52:34.620683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.753 qpair failed and we were unable to recover it. 
00:27:26.753 [2024-07-24 21:52:34.630491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.753 [2024-07-24 21:52:34.630632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.753 [2024-07-24 21:52:34.630651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.753 [2024-07-24 21:52:34.630658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.753 [2024-07-24 21:52:34.630664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.753 [2024-07-24 21:52:34.630680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.753 qpair failed and we were unable to recover it. 00:27:26.753 [2024-07-24 21:52:34.640532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.753 [2024-07-24 21:52:34.640676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.753 [2024-07-24 21:52:34.640694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.753 [2024-07-24 21:52:34.640701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.753 [2024-07-24 21:52:34.640707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.753 [2024-07-24 21:52:34.640724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.753 qpair failed and we were unable to recover it. 00:27:26.753 [2024-07-24 21:52:34.650558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.753 [2024-07-24 21:52:34.650698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.753 [2024-07-24 21:52:34.650716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.753 [2024-07-24 21:52:34.650723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.753 [2024-07-24 21:52:34.650729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.753 [2024-07-24 21:52:34.650746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.753 qpair failed and we were unable to recover it. 
00:27:26.753 [2024-07-24 21:52:34.660584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.753 [2024-07-24 21:52:34.660720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.753 [2024-07-24 21:52:34.660738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.753 [2024-07-24 21:52:34.660745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.753 [2024-07-24 21:52:34.660755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.753 [2024-07-24 21:52:34.660771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.753 qpair failed and we were unable to recover it. 00:27:26.753 [2024-07-24 21:52:34.670631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.753 [2024-07-24 21:52:34.670797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.753 [2024-07-24 21:52:34.670815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.753 [2024-07-24 21:52:34.670822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.753 [2024-07-24 21:52:34.670828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.753 [2024-07-24 21:52:34.670844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.753 qpair failed and we were unable to recover it. 00:27:26.753 [2024-07-24 21:52:34.680641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.753 [2024-07-24 21:52:34.680776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.753 [2024-07-24 21:52:34.680795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.753 [2024-07-24 21:52:34.680802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.753 [2024-07-24 21:52:34.680808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.753 [2024-07-24 21:52:34.680824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.753 qpair failed and we were unable to recover it. 
00:27:26.753 [2024-07-24 21:52:34.690678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.753 [2024-07-24 21:52:34.690815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.753 [2024-07-24 21:52:34.690834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.753 [2024-07-24 21:52:34.690840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.753 [2024-07-24 21:52:34.690846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.753 [2024-07-24 21:52:34.690863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.753 qpair failed and we were unable to recover it. 00:27:26.753 [2024-07-24 21:52:34.700735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.753 [2024-07-24 21:52:34.700897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.700916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.700923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.700929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.700945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 00:27:26.754 [2024-07-24 21:52:34.710826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.711008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.711026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.711034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.711039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.711062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 
00:27:26.754 [2024-07-24 21:52:34.720717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.720901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.720919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.720927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.720932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.720948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 00:27:26.754 [2024-07-24 21:52:34.730784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.730926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.730944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.730951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.730957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.730974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 00:27:26.754 [2024-07-24 21:52:34.740806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.740942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.740961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.740968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.740973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.740990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 
00:27:26.754 [2024-07-24 21:52:34.750828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.750961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.750980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.750986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.750996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.751012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 00:27:26.754 [2024-07-24 21:52:34.760861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.761000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.761018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.761025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.761031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.761055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 00:27:26.754 [2024-07-24 21:52:34.770933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.771088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.771106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.771113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.771119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.771135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 
00:27:26.754 [2024-07-24 21:52:34.780926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.781070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.781089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.781096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.781102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.781118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 00:27:26.754 [2024-07-24 21:52:34.790949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.791093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.791112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.791119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.791125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.791141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 00:27:26.754 [2024-07-24 21:52:34.800976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.801151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.801169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.801176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.801182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.801198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 
00:27:26.754 [2024-07-24 21:52:34.811023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.811165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.811183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.811190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.811196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.811212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 00:27:26.754 [2024-07-24 21:52:34.821056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.821199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.754 [2024-07-24 21:52:34.821217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.754 [2024-07-24 21:52:34.821224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.754 [2024-07-24 21:52:34.821230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.754 [2024-07-24 21:52:34.821246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.754 qpair failed and we were unable to recover it. 00:27:26.754 [2024-07-24 21:52:34.831087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.754 [2024-07-24 21:52:34.831225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.755 [2024-07-24 21:52:34.831244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.755 [2024-07-24 21:52:34.831251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.755 [2024-07-24 21:52:34.831257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.755 [2024-07-24 21:52:34.831273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.755 qpair failed and we were unable to recover it. 
00:27:26.755 [2024-07-24 21:52:34.841035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.755 [2024-07-24 21:52:34.841187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.755 [2024-07-24 21:52:34.841205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.755 [2024-07-24 21:52:34.841216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.755 [2024-07-24 21:52:34.841222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.755 [2024-07-24 21:52:34.841239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.755 qpair failed and we were unable to recover it. 00:27:26.755 [2024-07-24 21:52:34.851144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.755 [2024-07-24 21:52:34.851281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.755 [2024-07-24 21:52:34.851300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.755 [2024-07-24 21:52:34.851306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.755 [2024-07-24 21:52:34.851313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.755 [2024-07-24 21:52:34.851329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.755 qpair failed and we were unable to recover it. 00:27:26.755 [2024-07-24 21:52:34.861206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.755 [2024-07-24 21:52:34.861344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.755 [2024-07-24 21:52:34.861362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.755 [2024-07-24 21:52:34.861369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.755 [2024-07-24 21:52:34.861376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:26.755 [2024-07-24 21:52:34.861393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.755 qpair failed and we were unable to recover it. 
00:27:27.016 [2024-07-24 21:52:34.871216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.016 [2024-07-24 21:52:34.871361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.016 [2024-07-24 21:52:34.871379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.016 [2024-07-24 21:52:34.871386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.016 [2024-07-24 21:52:34.871393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.016 [2024-07-24 21:52:34.871409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-24 21:52:34.881232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.016 [2024-07-24 21:52:34.881372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.016 [2024-07-24 21:52:34.881390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.016 [2024-07-24 21:52:34.881397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.016 [2024-07-24 21:52:34.881403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.016 [2024-07-24 21:52:34.881420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-24 21:52:34.891245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.016 [2024-07-24 21:52:34.891381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.016 [2024-07-24 21:52:34.891400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.016 [2024-07-24 21:52:34.891407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.016 [2024-07-24 21:52:34.891413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.016 [2024-07-24 21:52:34.891429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.016 qpair failed and we were unable to recover it. 
00:27:27.016 [2024-07-24 21:52:34.901272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.016 [2024-07-24 21:52:34.901411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.016 [2024-07-24 21:52:34.901430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.016 [2024-07-24 21:52:34.901437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.016 [2024-07-24 21:52:34.901443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.016 [2024-07-24 21:52:34.901460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-24 21:52:34.911301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.016 [2024-07-24 21:52:34.911437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.016 [2024-07-24 21:52:34.911455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.016 [2024-07-24 21:52:34.911462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.016 [2024-07-24 21:52:34.911468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.016 [2024-07-24 21:52:34.911484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-24 21:52:34.921300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.016 [2024-07-24 21:52:34.921429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.016 [2024-07-24 21:52:34.921447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.016 [2024-07-24 21:52:34.921454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.016 [2024-07-24 21:52:34.921460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.016 [2024-07-24 21:52:34.921477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.016 qpair failed and we were unable to recover it. 
00:27:27.016 [2024-07-24 21:52:34.931364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.016 [2024-07-24 21:52:34.931534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.016 [2024-07-24 21:52:34.931553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.016 [2024-07-24 21:52:34.931564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.016 [2024-07-24 21:52:34.931570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.016 [2024-07-24 21:52:34.931586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-24 21:52:34.941316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:34.941456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:34.941475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:34.941481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:34.941488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:34.941504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-24 21:52:34.951408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:34.951545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:34.951563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:34.951570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:34.951576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:34.951592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 
00:27:27.017 [2024-07-24 21:52:34.961565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:34.961701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:34.961719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:34.961726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:34.961732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:34.961748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-24 21:52:34.971401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:34.971553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:34.971572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:34.971579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:34.971585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:34.971601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-24 21:52:34.981547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:34.981689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:34.981708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:34.981715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:34.981721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:34.981737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 
00:27:27.017 [2024-07-24 21:52:34.991459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:34.991597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:34.991616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:34.991623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:34.991628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:34.991645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-24 21:52:35.001538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:35.001676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:35.001695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:35.001702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:35.001708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:35.001724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-24 21:52:35.011529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:35.011666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:35.011684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:35.011692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:35.011697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:35.011713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 
00:27:27.017 [2024-07-24 21:52:35.021600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:35.021738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:35.021756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:35.021766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:35.021772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:35.021788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-24 21:52:35.031640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:35.031776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:35.031794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:35.031801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:35.031807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:35.031824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-24 21:52:35.041697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:35.041833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:35.041852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:35.041859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:35.041865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:35.041881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 
00:27:27.017 [2024-07-24 21:52:35.051649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:35.051806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:35.051825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:35.051831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:35.051837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:35.051853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-24 21:52:35.061763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:35.061915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.017 [2024-07-24 21:52:35.061933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.017 [2024-07-24 21:52:35.061940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.017 [2024-07-24 21:52:35.061945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.017 [2024-07-24 21:52:35.061962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-24 21:52:35.071790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.017 [2024-07-24 21:52:35.071929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.018 [2024-07-24 21:52:35.071947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.018 [2024-07-24 21:52:35.071955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.018 [2024-07-24 21:52:35.071961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.018 [2024-07-24 21:52:35.071977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.018 qpair failed and we were unable to recover it. 
00:27:27.018 [2024-07-24 21:52:35.081712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.018 [2024-07-24 21:52:35.081860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.018 [2024-07-24 21:52:35.081878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.018 [2024-07-24 21:52:35.081885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.018 [2024-07-24 21:52:35.081892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.018 [2024-07-24 21:52:35.081908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-24 21:52:35.091795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.018 [2024-07-24 21:52:35.091929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.018 [2024-07-24 21:52:35.091948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.018 [2024-07-24 21:52:35.091955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.018 [2024-07-24 21:52:35.091961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.018 [2024-07-24 21:52:35.091977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-24 21:52:35.101766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.018 [2024-07-24 21:52:35.101908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.018 [2024-07-24 21:52:35.101927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.018 [2024-07-24 21:52:35.101933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.018 [2024-07-24 21:52:35.101940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.018 [2024-07-24 21:52:35.101956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.018 qpair failed and we were unable to recover it. 
00:27:27.018 [2024-07-24 21:52:35.111868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.018 [2024-07-24 21:52:35.112009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.018 [2024-07-24 21:52:35.112028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.018 [2024-07-24 21:52:35.112039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.018 [2024-07-24 21:52:35.112050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.018 [2024-07-24 21:52:35.112067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-24 21:52:35.121890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.018 [2024-07-24 21:52:35.122024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.018 [2024-07-24 21:52:35.122049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.018 [2024-07-24 21:52:35.122057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.018 [2024-07-24 21:52:35.122063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.018 [2024-07-24 21:52:35.122079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.280 [2024-07-24 21:52:35.131907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.280 [2024-07-24 21:52:35.132050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.280 [2024-07-24 21:52:35.132069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.280 [2024-07-24 21:52:35.132077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.280 [2024-07-24 21:52:35.132083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.280 [2024-07-24 21:52:35.132099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.280 qpair failed and we were unable to recover it. 
00:27:27.280 [2024-07-24 21:52:35.141991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.280 [2024-07-24 21:52:35.142139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.280 [2024-07-24 21:52:35.142158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.280 [2024-07-24 21:52:35.142165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.280 [2024-07-24 21:52:35.142171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.280 [2024-07-24 21:52:35.142188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-07-24 21:52:35.151968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.280 [2024-07-24 21:52:35.152119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.280 [2024-07-24 21:52:35.152138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.280 [2024-07-24 21:52:35.152145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.280 [2024-07-24 21:52:35.152150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.280 [2024-07-24 21:52:35.152166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-07-24 21:52:35.161956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.280 [2024-07-24 21:52:35.162096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.280 [2024-07-24 21:52:35.162115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.280 [2024-07-24 21:52:35.162122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.280 [2024-07-24 21:52:35.162128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.280 [2024-07-24 21:52:35.162144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.280 qpair failed and we were unable to recover it. 
00:27:27.280 [2024-07-24 21:52:35.172061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.280 [2024-07-24 21:52:35.172196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.280 [2024-07-24 21:52:35.172214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.280 [2024-07-24 21:52:35.172221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.280 [2024-07-24 21:52:35.172227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.280 [2024-07-24 21:52:35.172243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-07-24 21:52:35.182007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.280 [2024-07-24 21:52:35.182147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.280 [2024-07-24 21:52:35.182165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.280 [2024-07-24 21:52:35.182172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.280 [2024-07-24 21:52:35.182178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.280 [2024-07-24 21:52:35.182195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-07-24 21:52:35.192128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.280 [2024-07-24 21:52:35.192263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.280 [2024-07-24 21:52:35.192282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.280 [2024-07-24 21:52:35.192289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.280 [2024-07-24 21:52:35.192296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.280 [2024-07-24 21:52:35.192312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.280 qpair failed and we were unable to recover it. 
00:27:27.280 [2024-07-24 21:52:35.202151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.280 [2024-07-24 21:52:35.202288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.280 [2024-07-24 21:52:35.202310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.280 [2024-07-24 21:52:35.202317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.280 [2024-07-24 21:52:35.202323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.280 [2024-07-24 21:52:35.202340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-07-24 21:52:35.212169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.280 [2024-07-24 21:52:35.212308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.280 [2024-07-24 21:52:35.212327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.280 [2024-07-24 21:52:35.212334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.280 [2024-07-24 21:52:35.212340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.280 [2024-07-24 21:52:35.212356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-07-24 21:52:35.222208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.280 [2024-07-24 21:52:35.222345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.280 [2024-07-24 21:52:35.222364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.280 [2024-07-24 21:52:35.222371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.280 [2024-07-24 21:52:35.222377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.280 [2024-07-24 21:52:35.222394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.280 qpair failed and we were unable to recover it. 
00:27:27.280 [2024-07-24 21:52:35.232209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.280 [2024-07-24 21:52:35.232349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.280 [2024-07-24 21:52:35.232368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.280 [2024-07-24 21:52:35.232375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.232381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.232397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-07-24 21:52:35.242269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.242417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.242435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.242442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.242448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.242465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-07-24 21:52:35.252227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.252362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.252381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.252388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.252394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.252410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 
00:27:27.281 [2024-07-24 21:52:35.262248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.262385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.262403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.262410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.262416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.262432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-07-24 21:52:35.272320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.272456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.272475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.272482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.272489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.272505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-07-24 21:52:35.282360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.282498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.282516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.282523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.282529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.282546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 
00:27:27.281 [2024-07-24 21:52:35.292396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.292571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.292593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.292600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.292606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.292621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-07-24 21:52:35.302405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.302726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.302745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.302752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.302758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.302773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-07-24 21:52:35.312455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.312591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.312609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.312616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.312622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.312638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 
00:27:27.281 [2024-07-24 21:52:35.322493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.322629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.322648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.322654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.322660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.322677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-07-24 21:52:35.332519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.332657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.332675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.332683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.332688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.332708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-07-24 21:52:35.342537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.342676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.342694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.342701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.342707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.342724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 
00:27:27.281 [2024-07-24 21:52:35.352604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.352738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.352756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.352763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.352769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.281 [2024-07-24 21:52:35.352786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-07-24 21:52:35.362607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.281 [2024-07-24 21:52:35.362742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.281 [2024-07-24 21:52:35.362761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.281 [2024-07-24 21:52:35.362768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.281 [2024-07-24 21:52:35.362774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.282 [2024-07-24 21:52:35.362790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-07-24 21:52:35.372633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.282 [2024-07-24 21:52:35.372770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.282 [2024-07-24 21:52:35.372789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.282 [2024-07-24 21:52:35.372795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.282 [2024-07-24 21:52:35.372802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.282 [2024-07-24 21:52:35.372818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.282 qpair failed and we were unable to recover it. 
00:27:27.282 [2024-07-24 21:52:35.382634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.282 [2024-07-24 21:52:35.382774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.282 [2024-07-24 21:52:35.382800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.282 [2024-07-24 21:52:35.382807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.282 [2024-07-24 21:52:35.382813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.282 [2024-07-24 21:52:35.382829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-07-24 21:52:35.392737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.282 [2024-07-24 21:52:35.392903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.282 [2024-07-24 21:52:35.392921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.282 [2024-07-24 21:52:35.392928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.282 [2024-07-24 21:52:35.392934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.282 [2024-07-24 21:52:35.392951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.543 [2024-07-24 21:52:35.402715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.543 [2024-07-24 21:52:35.402852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.543 [2024-07-24 21:52:35.402871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.543 [2024-07-24 21:52:35.402879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.543 [2024-07-24 21:52:35.402885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.543 [2024-07-24 21:52:35.402901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.543 qpair failed and we were unable to recover it. 
00:27:27.543 [2024-07-24 21:52:35.412756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.543 [2024-07-24 21:52:35.412895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.543 [2024-07-24 21:52:35.412914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.543 [2024-07-24 21:52:35.412921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.543 [2024-07-24 21:52:35.412927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.543 [2024-07-24 21:52:35.412943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-07-24 21:52:35.422775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.543 [2024-07-24 21:52:35.422928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.543 [2024-07-24 21:52:35.422947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.543 [2024-07-24 21:52:35.422954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.543 [2024-07-24 21:52:35.422960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.543 [2024-07-24 21:52:35.422980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-07-24 21:52:35.432812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.543 [2024-07-24 21:52:35.432955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.543 [2024-07-24 21:52:35.432974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.543 [2024-07-24 21:52:35.432981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.543 [2024-07-24 21:52:35.432987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.543 [2024-07-24 21:52:35.433003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.543 qpair failed and we were unable to recover it. 
00:27:27.543 [2024-07-24 21:52:35.442857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.543 [2024-07-24 21:52:35.443019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.543 [2024-07-24 21:52:35.443037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.543 [2024-07-24 21:52:35.443050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.543 [2024-07-24 21:52:35.443056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.543 [2024-07-24 21:52:35.443072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-07-24 21:52:35.452874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.543 [2024-07-24 21:52:35.453014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.543 [2024-07-24 21:52:35.453032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.543 [2024-07-24 21:52:35.453040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.543 [2024-07-24 21:52:35.453053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.543 [2024-07-24 21:52:35.453069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-07-24 21:52:35.462893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.543 [2024-07-24 21:52:35.463033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.543 [2024-07-24 21:52:35.463057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.543 [2024-07-24 21:52:35.463065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.543 [2024-07-24 21:52:35.463071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.543 [2024-07-24 21:52:35.463088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.543 qpair failed and we were unable to recover it. 
00:27:27.543 [2024-07-24 21:52:35.472928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.543 [2024-07-24 21:52:35.473072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.543 [2024-07-24 21:52:35.473095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.543 [2024-07-24 21:52:35.473102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.543 [2024-07-24 21:52:35.473108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.543 [2024-07-24 21:52:35.473126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-07-24 21:52:35.482924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.543 [2024-07-24 21:52:35.483067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.544 [2024-07-24 21:52:35.483086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.544 [2024-07-24 21:52:35.483093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.544 [2024-07-24 21:52:35.483099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.544 [2024-07-24 21:52:35.483115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-07-24 21:52:35.492988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.544 [2024-07-24 21:52:35.493131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.544 [2024-07-24 21:52:35.493150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.544 [2024-07-24 21:52:35.493156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.544 [2024-07-24 21:52:35.493162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.544 [2024-07-24 21:52:35.493179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.544 qpair failed and we were unable to recover it. 
00:27:27.544 [2024-07-24 21:52:35.502996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.544 [2024-07-24 21:52:35.503141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.544 [2024-07-24 21:52:35.503160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.544 [2024-07-24 21:52:35.503167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.544 [2024-07-24 21:52:35.503173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.544 [2024-07-24 21:52:35.503189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-07-24 21:52:35.513040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.544 [2024-07-24 21:52:35.513174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.544 [2024-07-24 21:52:35.513194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.544 [2024-07-24 21:52:35.513201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.544 [2024-07-24 21:52:35.513210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.544 [2024-07-24 21:52:35.513227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-07-24 21:52:35.523069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.544 [2024-07-24 21:52:35.523258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.544 [2024-07-24 21:52:35.523276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.544 [2024-07-24 21:52:35.523283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.544 [2024-07-24 21:52:35.523289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.544 [2024-07-24 21:52:35.523306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.544 qpair failed and we were unable to recover it. 
00:27:27.544 [2024-07-24 21:52:35.533126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.544 [2024-07-24 21:52:35.533451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.544 [2024-07-24 21:52:35.533470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.544 [2024-07-24 21:52:35.533476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.544 [2024-07-24 21:52:35.533482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.544 [2024-07-24 21:52:35.533498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-07-24 21:52:35.543109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.544 [2024-07-24 21:52:35.543249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.544 [2024-07-24 21:52:35.543268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.544 [2024-07-24 21:52:35.543275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.544 [2024-07-24 21:52:35.543280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.544 [2024-07-24 21:52:35.543297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-07-24 21:52:35.553357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.544 [2024-07-24 21:52:35.553503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.544 [2024-07-24 21:52:35.553522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.544 [2024-07-24 21:52:35.553528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.544 [2024-07-24 21:52:35.553534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.544 [2024-07-24 21:52:35.553551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.544 qpair failed and we were unable to recover it. 
00:27:27.544 [2024-07-24 21:52:35.563153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.544 [2024-07-24 21:52:35.563325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.544 [2024-07-24 21:52:35.563346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.544 [2024-07-24 21:52:35.563353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.544 [2024-07-24 21:52:35.563359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.544 [2024-07-24 21:52:35.563376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-07-24 21:52:35.573203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.544 [2024-07-24 21:52:35.573345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.544 [2024-07-24 21:52:35.573363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.544 [2024-07-24 21:52:35.573370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.544 [2024-07-24 21:52:35.573376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.544 [2024-07-24 21:52:35.573392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-07-24 21:52:35.583218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.544 [2024-07-24 21:52:35.583353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.544 [2024-07-24 21:52:35.583372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.544 [2024-07-24 21:52:35.583379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.544 [2024-07-24 21:52:35.583385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.544 [2024-07-24 21:52:35.583402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.544 qpair failed and we were unable to recover it. 
00:27:27.544 [2024-07-24 21:52:35.593284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.544 [2024-07-24 21:52:35.593413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.544 [2024-07-24 21:52:35.593431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.544 [2024-07-24 21:52:35.593438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.545 [2024-07-24 21:52:35.593444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.545 [2024-07-24 21:52:35.593460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.545 qpair failed and we were unable to recover it. 00:27:27.545 [2024-07-24 21:52:35.603282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.545 [2024-07-24 21:52:35.603411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.545 [2024-07-24 21:52:35.603429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.545 [2024-07-24 21:52:35.603436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.545 [2024-07-24 21:52:35.603446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.545 [2024-07-24 21:52:35.603462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.545 qpair failed and we were unable to recover it. 00:27:27.545 [2024-07-24 21:52:35.613319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.545 [2024-07-24 21:52:35.613455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.545 [2024-07-24 21:52:35.613474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.545 [2024-07-24 21:52:35.613481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.545 [2024-07-24 21:52:35.613486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.545 [2024-07-24 21:52:35.613503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.545 qpair failed and we were unable to recover it. 
00:27:27.545 [2024-07-24 21:52:35.623350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.545 [2024-07-24 21:52:35.623680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.545 [2024-07-24 21:52:35.623697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.545 [2024-07-24 21:52:35.623704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.545 [2024-07-24 21:52:35.623710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.545 [2024-07-24 21:52:35.623725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.545 qpair failed and we were unable to recover it. 00:27:27.545 [2024-07-24 21:52:35.633301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.545 [2024-07-24 21:52:35.633434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.545 [2024-07-24 21:52:35.633453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.545 [2024-07-24 21:52:35.633461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.545 [2024-07-24 21:52:35.633467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.545 [2024-07-24 21:52:35.633483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.545 qpair failed and we were unable to recover it. 00:27:27.545 [2024-07-24 21:52:35.643400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.545 [2024-07-24 21:52:35.643540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.545 [2024-07-24 21:52:35.643559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.545 [2024-07-24 21:52:35.643566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.545 [2024-07-24 21:52:35.643572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.545 [2024-07-24 21:52:35.643588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.545 qpair failed and we were unable to recover it. 
00:27:27.545 [2024-07-24 21:52:35.653403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.545 [2024-07-24 21:52:35.653540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.545 [2024-07-24 21:52:35.653559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.545 [2024-07-24 21:52:35.653566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.545 [2024-07-24 21:52:35.653572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.545 [2024-07-24 21:52:35.653589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.545 qpair failed and we were unable to recover it. 00:27:27.806 [2024-07-24 21:52:35.663450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.806 [2024-07-24 21:52:35.663592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.806 [2024-07-24 21:52:35.663611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.806 [2024-07-24 21:52:35.663618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.806 [2024-07-24 21:52:35.663624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.806 [2024-07-24 21:52:35.663641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.806 qpair failed and we were unable to recover it. 00:27:27.806 [2024-07-24 21:52:35.673386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.806 [2024-07-24 21:52:35.673531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.806 [2024-07-24 21:52:35.673549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.806 [2024-07-24 21:52:35.673556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.806 [2024-07-24 21:52:35.673562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.806 [2024-07-24 21:52:35.673578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.806 qpair failed and we were unable to recover it. 
00:27:27.806 [2024-07-24 21:52:35.683502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.806 [2024-07-24 21:52:35.683633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.806 [2024-07-24 21:52:35.683652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.806 [2024-07-24 21:52:35.683659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.806 [2024-07-24 21:52:35.683665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.806 [2024-07-24 21:52:35.683681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.806 qpair failed and we were unable to recover it. 00:27:27.806 [2024-07-24 21:52:35.693460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.806 [2024-07-24 21:52:35.693633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.806 [2024-07-24 21:52:35.693652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.806 [2024-07-24 21:52:35.693660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.806 [2024-07-24 21:52:35.693670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.806 [2024-07-24 21:52:35.693686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.806 qpair failed and we were unable to recover it. 00:27:27.806 [2024-07-24 21:52:35.703510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.806 [2024-07-24 21:52:35.703646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.806 [2024-07-24 21:52:35.703664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.806 [2024-07-24 21:52:35.703671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.806 [2024-07-24 21:52:35.703678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.806 [2024-07-24 21:52:35.703694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.806 qpair failed and we were unable to recover it. 
00:27:27.806 [2024-07-24 21:52:35.713596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.806 [2024-07-24 21:52:35.713736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.806 [2024-07-24 21:52:35.713754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.806 [2024-07-24 21:52:35.713761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.806 [2024-07-24 21:52:35.713767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.806 [2024-07-24 21:52:35.713784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.806 qpair failed and we were unable to recover it. 00:27:27.806 [2024-07-24 21:52:35.723614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.806 [2024-07-24 21:52:35.723750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.806 [2024-07-24 21:52:35.723768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.806 [2024-07-24 21:52:35.723775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.806 [2024-07-24 21:52:35.723781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.806 [2024-07-24 21:52:35.723797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.806 qpair failed and we were unable to recover it. 00:27:27.806 [2024-07-24 21:52:35.733650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.806 [2024-07-24 21:52:35.733785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.806 [2024-07-24 21:52:35.733804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.806 [2024-07-24 21:52:35.733810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.806 [2024-07-24 21:52:35.733816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.806 [2024-07-24 21:52:35.733833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.806 qpair failed and we were unable to recover it. 
00:27:27.806 [2024-07-24 21:52:35.743665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.806 [2024-07-24 21:52:35.743807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.806 [2024-07-24 21:52:35.743826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.806 [2024-07-24 21:52:35.743832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.806 [2024-07-24 21:52:35.743839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.806 [2024-07-24 21:52:35.743856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.806 qpair failed and we were unable to recover it. 00:27:27.806 [2024-07-24 21:52:35.753702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.806 [2024-07-24 21:52:35.753835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.806 [2024-07-24 21:52:35.753854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.806 [2024-07-24 21:52:35.753861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.806 [2024-07-24 21:52:35.753867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.806 [2024-07-24 21:52:35.753883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.806 qpair failed and we were unable to recover it. 00:27:27.806 [2024-07-24 21:52:35.763729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.806 [2024-07-24 21:52:35.763880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.806 [2024-07-24 21:52:35.763899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.806 [2024-07-24 21:52:35.763906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.806 [2024-07-24 21:52:35.763912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.806 [2024-07-24 21:52:35.763928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.806 qpair failed and we were unable to recover it. 
00:27:27.806 [2024-07-24 21:52:35.773759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.806 [2024-07-24 21:52:35.773899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.806 [2024-07-24 21:52:35.773917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.806 [2024-07-24 21:52:35.773924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.806 [2024-07-24 21:52:35.773930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.806 [2024-07-24 21:52:35.773946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.806 qpair failed and we were unable to recover it. 00:27:27.806 [2024-07-24 21:52:35.783842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.783984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.784002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.784009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.784018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.784034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.807 qpair failed and we were unable to recover it. 00:27:27.807 [2024-07-24 21:52:35.793870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.794009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.794028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.794035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.794041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.794064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.807 qpair failed and we were unable to recover it. 
00:27:27.807 [2024-07-24 21:52:35.803900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.804040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.804063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.804069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.804076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.804093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.807 qpair failed and we were unable to recover it. 00:27:27.807 [2024-07-24 21:52:35.813954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.814112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.814131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.814138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.814143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.814160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.807 qpair failed and we were unable to recover it. 00:27:27.807 [2024-07-24 21:52:35.823914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.824091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.824109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.824116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.824122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.824138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.807 qpair failed and we were unable to recover it. 
00:27:27.807 [2024-07-24 21:52:35.833975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.834117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.834136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.834143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.834148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.834164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.807 qpair failed and we were unable to recover it. 00:27:27.807 [2024-07-24 21:52:35.843976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.844130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.844149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.844156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.844162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.844179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.807 qpair failed and we were unable to recover it. 00:27:27.807 [2024-07-24 21:52:35.854010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.854149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.854168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.854176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.854183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.854199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.807 qpair failed and we were unable to recover it. 
00:27:27.807 [2024-07-24 21:52:35.863960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.864105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.864124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.864131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.864137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.864153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.807 qpair failed and we were unable to recover it. 00:27:27.807 [2024-07-24 21:52:35.874056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.874192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.874211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.874221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.874227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.874243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.807 qpair failed and we were unable to recover it. 00:27:27.807 [2024-07-24 21:52:35.884082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.884221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.884239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.884246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.884252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.884268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.807 qpair failed and we were unable to recover it. 
00:27:27.807 [2024-07-24 21:52:35.894128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.894266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.894284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.894292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.894298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.894313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.807 qpair failed and we were unable to recover it. 00:27:27.807 [2024-07-24 21:52:35.904152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.807 [2024-07-24 21:52:35.904289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.807 [2024-07-24 21:52:35.904308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.807 [2024-07-24 21:52:35.904314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.807 [2024-07-24 21:52:35.904320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.807 [2024-07-24 21:52:35.904336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.808 qpair failed and we were unable to recover it. 00:27:27.808 [2024-07-24 21:52:35.914196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.808 [2024-07-24 21:52:35.914334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.808 [2024-07-24 21:52:35.914353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.808 [2024-07-24 21:52:35.914360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.808 [2024-07-24 21:52:35.914365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:27.808 [2024-07-24 21:52:35.914382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:27.808 qpair failed and we were unable to recover it. 
00:27:28.069 [2024-07-24 21:52:35.924211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.069 [2024-07-24 21:52:35.924350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.069 [2024-07-24 21:52:35.924369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.069 [2024-07-24 21:52:35.924376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.069 [2024-07-24 21:52:35.924382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.069 [2024-07-24 21:52:35.924398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.069 qpair failed and we were unable to recover it. 00:27:28.069 [2024-07-24 21:52:35.934241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.069 [2024-07-24 21:52:35.934379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.069 [2024-07-24 21:52:35.934398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.069 [2024-07-24 21:52:35.934404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.069 [2024-07-24 21:52:35.934410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.069 [2024-07-24 21:52:35.934426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.069 qpair failed and we were unable to recover it. 00:27:28.069 [2024-07-24 21:52:35.944208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.069 [2024-07-24 21:52:35.944346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.069 [2024-07-24 21:52:35.944365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.069 [2024-07-24 21:52:35.944372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.069 [2024-07-24 21:52:35.944378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.069 [2024-07-24 21:52:35.944394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.069 qpair failed and we were unable to recover it. 
00:27:28.069 [2024-07-24 21:52:35.954225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.069 [2024-07-24 21:52:35.954362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.069 [2024-07-24 21:52:35.954382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.069 [2024-07-24 21:52:35.954389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.069 [2024-07-24 21:52:35.954395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:35.954411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 00:27:28.070 [2024-07-24 21:52:35.964335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:35.964467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:35.964485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:35.964495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:35.964502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:35.964518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 00:27:28.070 [2024-07-24 21:52:35.974356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:35.974489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:35.974508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:35.974514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:35.974521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:35.974537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 
00:27:28.070 [2024-07-24 21:52:35.984375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:35.984511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:35.984529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:35.984536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:35.984542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:35.984558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 00:27:28.070 [2024-07-24 21:52:35.994403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:35.994541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:35.994559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:35.994566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:35.994572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:35.994588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 00:27:28.070 [2024-07-24 21:52:36.004399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:36.004536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:36.004554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:36.004561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:36.004567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:36.004583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 
00:27:28.070 [2024-07-24 21:52:36.014464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:36.014603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:36.014621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:36.014628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:36.014634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:36.014650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 00:27:28.070 [2024-07-24 21:52:36.024501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:36.024639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:36.024657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:36.024664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:36.024670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:36.024686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 00:27:28.070 [2024-07-24 21:52:36.034515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:36.034653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:36.034671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:36.034678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:36.034684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:36.034701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 
00:27:28.070 [2024-07-24 21:52:36.044536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:36.044671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:36.044690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:36.044697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:36.044703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:36.044719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 00:27:28.070 [2024-07-24 21:52:36.054565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:36.054702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:36.054721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:36.054731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:36.054737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:36.054753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 00:27:28.070 [2024-07-24 21:52:36.064601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:36.064739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:36.064758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:36.064765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:36.064770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:36.064787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 
00:27:28.070 [2024-07-24 21:52:36.074632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:36.074768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:36.074786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:36.074793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:36.074799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.070 [2024-07-24 21:52:36.074815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.070 qpair failed and we were unable to recover it. 00:27:28.070 [2024-07-24 21:52:36.084662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.070 [2024-07-24 21:52:36.084799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.070 [2024-07-24 21:52:36.084818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.070 [2024-07-24 21:52:36.084825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.070 [2024-07-24 21:52:36.084831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.071 [2024-07-24 21:52:36.084847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.071 qpair failed and we were unable to recover it. 00:27:28.071 [2024-07-24 21:52:36.094698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.071 [2024-07-24 21:52:36.094832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.071 [2024-07-24 21:52:36.094851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.071 [2024-07-24 21:52:36.094857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.071 [2024-07-24 21:52:36.094863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.071 [2024-07-24 21:52:36.094879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.071 qpair failed and we were unable to recover it. 
00:27:28.071 [2024-07-24 21:52:36.104723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.071 [2024-07-24 21:52:36.104861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.071 [2024-07-24 21:52:36.104880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.071 [2024-07-24 21:52:36.104887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.071 [2024-07-24 21:52:36.104893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.071 [2024-07-24 21:52:36.104909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.071 qpair failed and we were unable to recover it. 00:27:28.071 [2024-07-24 21:52:36.114750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.071 [2024-07-24 21:52:36.114885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.071 [2024-07-24 21:52:36.114904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.071 [2024-07-24 21:52:36.114911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.071 [2024-07-24 21:52:36.114916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.071 [2024-07-24 21:52:36.114933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.071 qpair failed and we were unable to recover it. 00:27:28.071 [2024-07-24 21:52:36.124774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.071 [2024-07-24 21:52:36.124910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.071 [2024-07-24 21:52:36.124928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.071 [2024-07-24 21:52:36.124935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.071 [2024-07-24 21:52:36.124941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.071 [2024-07-24 21:52:36.124957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.071 qpair failed and we were unable to recover it. 
00:27:28.071 [2024-07-24 21:52:36.134797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.071 [2024-07-24 21:52:36.134950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.071 [2024-07-24 21:52:36.134969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.071 [2024-07-24 21:52:36.134976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.071 [2024-07-24 21:52:36.134982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.071 [2024-07-24 21:52:36.134999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.071 qpair failed and we were unable to recover it. 00:27:28.071 [2024-07-24 21:52:36.144845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.071 [2024-07-24 21:52:36.144982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.071 [2024-07-24 21:52:36.145006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.071 [2024-07-24 21:52:36.145013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.071 [2024-07-24 21:52:36.145019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.071 [2024-07-24 21:52:36.145036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.071 qpair failed and we were unable to recover it. 00:27:28.071 [2024-07-24 21:52:36.154874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.071 [2024-07-24 21:52:36.155014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.071 [2024-07-24 21:52:36.155032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.071 [2024-07-24 21:52:36.155039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.071 [2024-07-24 21:52:36.155053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.071 [2024-07-24 21:52:36.155069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.071 qpair failed and we were unable to recover it. 
00:27:28.071 [2024-07-24 21:52:36.164910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.071 [2024-07-24 21:52:36.165065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.071 [2024-07-24 21:52:36.165084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.071 [2024-07-24 21:52:36.165091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.071 [2024-07-24 21:52:36.165097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.071 [2024-07-24 21:52:36.165114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.071 qpair failed and we were unable to recover it. 00:27:28.071 [2024-07-24 21:52:36.174926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.071 [2024-07-24 21:52:36.175068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.071 [2024-07-24 21:52:36.175087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.071 [2024-07-24 21:52:36.175094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.071 [2024-07-24 21:52:36.175100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.071 [2024-07-24 21:52:36.175116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.071 qpair failed and we were unable to recover it. 00:27:28.333 [2024-07-24 21:52:36.184960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.333 [2024-07-24 21:52:36.185104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.333 [2024-07-24 21:52:36.185123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.333 [2024-07-24 21:52:36.185130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.333 [2024-07-24 21:52:36.185136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.333 [2024-07-24 21:52:36.185153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.333 qpair failed and we were unable to recover it. 
00:27:28.333 [2024-07-24 21:52:36.194980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.333 [2024-07-24 21:52:36.195120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.333 [2024-07-24 21:52:36.195139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.333 [2024-07-24 21:52:36.195146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.333 [2024-07-24 21:52:36.195152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.333 [2024-07-24 21:52:36.195168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.333 qpair failed and we were unable to recover it. 00:27:28.333 [2024-07-24 21:52:36.204993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.333 [2024-07-24 21:52:36.205136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.333 [2024-07-24 21:52:36.205155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.333 [2024-07-24 21:52:36.205162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.333 [2024-07-24 21:52:36.205168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.333 [2024-07-24 21:52:36.205184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.333 qpair failed and we were unable to recover it. 00:27:28.333 [2024-07-24 21:52:36.215001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.215143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.215162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.215169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.215175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.215192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 
00:27:28.334 [2024-07-24 21:52:36.225036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.225182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.225201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.225208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.225214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.225230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 00:27:28.334 [2024-07-24 21:52:36.235301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.235438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.235460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.235467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.235473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.235489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 00:27:28.334 [2024-07-24 21:52:36.245112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.245246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.245264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.245271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.245277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.245293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 
00:27:28.334 [2024-07-24 21:52:36.255146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.255283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.255301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.255308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.255314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.255331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 00:27:28.334 [2024-07-24 21:52:36.265174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.265316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.265335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.265342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.265348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.265365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 00:27:28.334 [2024-07-24 21:52:36.275177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.275322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.275340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.275347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.275353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.275373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 
00:27:28.334 [2024-07-24 21:52:36.285212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.285353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.285372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.285379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.285385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.285404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 00:27:28.334 [2024-07-24 21:52:36.295240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.295377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.295395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.295402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.295408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.295424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 00:27:28.334 [2024-07-24 21:52:36.305261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.305394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.305412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.305419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.305425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.305442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 
00:27:28.334 [2024-07-24 21:52:36.315302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.315441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.315459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.315466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.315472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.315488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 00:27:28.334 [2024-07-24 21:52:36.325356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.325538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.325560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.325567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.325573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.325589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 00:27:28.334 [2024-07-24 21:52:36.335368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.335506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.335526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.334 [2024-07-24 21:52:36.335534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.334 [2024-07-24 21:52:36.335542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.334 [2024-07-24 21:52:36.335560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.334 qpair failed and we were unable to recover it. 
00:27:28.334 [2024-07-24 21:52:36.345429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.334 [2024-07-24 21:52:36.345569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.334 [2024-07-24 21:52:36.345588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.335 [2024-07-24 21:52:36.345594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.335 [2024-07-24 21:52:36.345600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.335 [2024-07-24 21:52:36.345617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.335 qpair failed and we were unable to recover it. 00:27:28.335 [2024-07-24 21:52:36.355351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.335 [2024-07-24 21:52:36.355489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.335 [2024-07-24 21:52:36.355508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.335 [2024-07-24 21:52:36.355515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.335 [2024-07-24 21:52:36.355521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.335 [2024-07-24 21:52:36.355537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.335 qpair failed and we were unable to recover it. 00:27:28.335 [2024-07-24 21:52:36.365426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.335 [2024-07-24 21:52:36.365579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.335 [2024-07-24 21:52:36.365597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.335 [2024-07-24 21:52:36.365604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.335 [2024-07-24 21:52:36.365610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.335 [2024-07-24 21:52:36.365631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.335 qpair failed and we were unable to recover it. 
00:27:28.335 [2024-07-24 21:52:36.375412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.335 [2024-07-24 21:52:36.375552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.335 [2024-07-24 21:52:36.375571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.335 [2024-07-24 21:52:36.375578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.335 [2024-07-24 21:52:36.375584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.335 [2024-07-24 21:52:36.375600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.335 qpair failed and we were unable to recover it. 00:27:28.335 [2024-07-24 21:52:36.385523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.335 [2024-07-24 21:52:36.385660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.335 [2024-07-24 21:52:36.385678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.335 [2024-07-24 21:52:36.385685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.335 [2024-07-24 21:52:36.385691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.335 [2024-07-24 21:52:36.385707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.335 qpair failed and we were unable to recover it. 00:27:28.335 [2024-07-24 21:52:36.395464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.335 [2024-07-24 21:52:36.395596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.335 [2024-07-24 21:52:36.395614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.335 [2024-07-24 21:52:36.395621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.335 [2024-07-24 21:52:36.395628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.335 [2024-07-24 21:52:36.395644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.335 qpair failed and we were unable to recover it. 
00:27:28.335 [2024-07-24 21:52:36.405548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.335 [2024-07-24 21:52:36.405682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.335 [2024-07-24 21:52:36.405700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.335 [2024-07-24 21:52:36.405707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.335 [2024-07-24 21:52:36.405713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.335 [2024-07-24 21:52:36.405730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.335 qpair failed and we were unable to recover it. 00:27:28.335 [2024-07-24 21:52:36.415577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.335 [2024-07-24 21:52:36.415709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.335 [2024-07-24 21:52:36.415731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.335 [2024-07-24 21:52:36.415738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.335 [2024-07-24 21:52:36.415744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.335 [2024-07-24 21:52:36.415761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.335 qpair failed and we were unable to recover it. 00:27:28.335 [2024-07-24 21:52:36.425540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.335 [2024-07-24 21:52:36.425680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.335 [2024-07-24 21:52:36.425699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.335 [2024-07-24 21:52:36.425706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.335 [2024-07-24 21:52:36.425712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.335 [2024-07-24 21:52:36.425728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.335 qpair failed and we were unable to recover it. 
00:27:28.335 [2024-07-24 21:52:36.435673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.335 [2024-07-24 21:52:36.435815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.335 [2024-07-24 21:52:36.435833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.335 [2024-07-24 21:52:36.435840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.335 [2024-07-24 21:52:36.435847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.335 [2024-07-24 21:52:36.435863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.335 qpair failed and we were unable to recover it. 00:27:28.335 [2024-07-24 21:52:36.445702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.335 [2024-07-24 21:52:36.445846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.335 [2024-07-24 21:52:36.445864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.335 [2024-07-24 21:52:36.445871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.335 [2024-07-24 21:52:36.445877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.335 [2024-07-24 21:52:36.445893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.335 qpair failed and we were unable to recover it. 00:27:28.597 [2024-07-24 21:52:36.455741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.597 [2024-07-24 21:52:36.455880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.597 [2024-07-24 21:52:36.455899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.597 [2024-07-24 21:52:36.455907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.597 [2024-07-24 21:52:36.455913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.597 [2024-07-24 21:52:36.455933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.597 qpair failed and we were unable to recover it. 
00:27:28.597 [2024-07-24 21:52:36.465741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.597 [2024-07-24 21:52:36.465879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.597 [2024-07-24 21:52:36.465899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.597 [2024-07-24 21:52:36.465907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.597 [2024-07-24 21:52:36.465914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.597 [2024-07-24 21:52:36.465931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.597 qpair failed and we were unable to recover it. 00:27:28.597 [2024-07-24 21:52:36.475792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.597 [2024-07-24 21:52:36.476114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.597 [2024-07-24 21:52:36.476132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.597 [2024-07-24 21:52:36.476139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.476145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.476162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-24 21:52:36.485710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.486050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.486068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.486075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.486081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.486097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 
00:27:28.598 [2024-07-24 21:52:36.495777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.495912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.495930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.495937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.495943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.495960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-24 21:52:36.506062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.506203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.506226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.506233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.506239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.506255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-24 21:52:36.515872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.516050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.516069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.516076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.516082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.516099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 
00:27:28.598 [2024-07-24 21:52:36.525849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.525985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.526003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.526010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.526016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.526032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-24 21:52:36.535955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.536098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.536121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.536130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.536136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.536153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-24 21:52:36.545985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.546133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.546152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.546159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.546169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.546186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 
00:27:28.598 [2024-07-24 21:52:36.556001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.556142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.556161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.556168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.556174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.556191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-24 21:52:36.566052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.566188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.566207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.566214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.566220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.566237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-24 21:52:36.576088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.576259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.576278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.576285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.576291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.576308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 
00:27:28.598 [2024-07-24 21:52:36.586076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.586216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.586235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.586242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.586248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.586265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-24 21:52:36.596061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.596200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.596219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.596226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.596232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.598 [2024-07-24 21:52:36.596249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-24 21:52:36.606092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.598 [2024-07-24 21:52:36.606226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.598 [2024-07-24 21:52:36.606245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.598 [2024-07-24 21:52:36.606251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.598 [2024-07-24 21:52:36.606258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.599 [2024-07-24 21:52:36.606274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.599 qpair failed and we were unable to recover it. 
00:27:28.599 [2024-07-24 21:52:36.616115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.599 [2024-07-24 21:52:36.616267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.599 [2024-07-24 21:52:36.616285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.599 [2024-07-24 21:52:36.616292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.599 [2024-07-24 21:52:36.616298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.599 [2024-07-24 21:52:36.616314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-24 21:52:36.626142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.599 [2024-07-24 21:52:36.626477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.599 [2024-07-24 21:52:36.626494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.599 [2024-07-24 21:52:36.626500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.599 [2024-07-24 21:52:36.626506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.599 [2024-07-24 21:52:36.626522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-24 21:52:36.636215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.599 [2024-07-24 21:52:36.636360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.599 [2024-07-24 21:52:36.636379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.599 [2024-07-24 21:52:36.636386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.599 [2024-07-24 21:52:36.636395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.599 [2024-07-24 21:52:36.636412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.599 qpair failed and we were unable to recover it. 
00:27:28.599 [2024-07-24 21:52:36.646243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.599 [2024-07-24 21:52:36.646378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.599 [2024-07-24 21:52:36.646396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.599 [2024-07-24 21:52:36.646403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.599 [2024-07-24 21:52:36.646409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.599 [2024-07-24 21:52:36.646426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-24 21:52:36.656270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.599 [2024-07-24 21:52:36.656408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.599 [2024-07-24 21:52:36.656427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.599 [2024-07-24 21:52:36.656434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.599 [2024-07-24 21:52:36.656439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.599 [2024-07-24 21:52:36.656456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-24 21:52:36.666327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.599 [2024-07-24 21:52:36.666467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.599 [2024-07-24 21:52:36.666486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.599 [2024-07-24 21:52:36.666493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.599 [2024-07-24 21:52:36.666499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.599 [2024-07-24 21:52:36.666515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.599 qpair failed and we were unable to recover it. 
00:27:28.599 [2024-07-24 21:52:36.676385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.599 [2024-07-24 21:52:36.676525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.599 [2024-07-24 21:52:36.676544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.599 [2024-07-24 21:52:36.676551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.599 [2024-07-24 21:52:36.676557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.599 [2024-07-24 21:52:36.676573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-24 21:52:36.686393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.599 [2024-07-24 21:52:36.686527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.599 [2024-07-24 21:52:36.686545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.599 [2024-07-24 21:52:36.686552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.599 [2024-07-24 21:52:36.686558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.599 [2024-07-24 21:52:36.686574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-24 21:52:36.696382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.599 [2024-07-24 21:52:36.696526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.599 [2024-07-24 21:52:36.696544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.599 [2024-07-24 21:52:36.696551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.599 [2024-07-24 21:52:36.696557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.599 [2024-07-24 21:52:36.696574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.599 qpair failed and we were unable to recover it. 
00:27:28.599 [2024-07-24 21:52:36.706343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.599 [2024-07-24 21:52:36.706480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.599 [2024-07-24 21:52:36.706498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.599 [2024-07-24 21:52:36.706505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.599 [2024-07-24 21:52:36.706511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.599 [2024-07-24 21:52:36.706527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.861 [2024-07-24 21:52:36.716460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.861 [2024-07-24 21:52:36.716599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.861 [2024-07-24 21:52:36.716619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.861 [2024-07-24 21:52:36.716626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.861 [2024-07-24 21:52:36.716632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.861 [2024-07-24 21:52:36.716649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.861 qpair failed and we were unable to recover it. 00:27:28.861 [2024-07-24 21:52:36.726456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.861 [2024-07-24 21:52:36.726600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.861 [2024-07-24 21:52:36.726618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.861 [2024-07-24 21:52:36.726625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.861 [2024-07-24 21:52:36.726636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.861 [2024-07-24 21:52:36.726653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.861 qpair failed and we were unable to recover it. 
00:27:28.861 [2024-07-24 21:52:36.736682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.861 [2024-07-24 21:52:36.736820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.861 [2024-07-24 21:52:36.736840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.861 [2024-07-24 21:52:36.736847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.861 [2024-07-24 21:52:36.736853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.861 [2024-07-24 21:52:36.736870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.861 qpair failed and we were unable to recover it. 00:27:28.861 [2024-07-24 21:52:36.746545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.861 [2024-07-24 21:52:36.746683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.861 [2024-07-24 21:52:36.746702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.861 [2024-07-24 21:52:36.746709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.861 [2024-07-24 21:52:36.746715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.861 [2024-07-24 21:52:36.746732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.861 qpair failed and we were unable to recover it. 00:27:28.861 [2024-07-24 21:52:36.756585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.861 [2024-07-24 21:52:36.756723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.861 [2024-07-24 21:52:36.756743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.861 [2024-07-24 21:52:36.756750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.861 [2024-07-24 21:52:36.756756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.861 [2024-07-24 21:52:36.756772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.861 qpair failed and we were unable to recover it. 
00:27:28.861 [2024-07-24 21:52:36.766608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.861 [2024-07-24 21:52:36.766747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.861 [2024-07-24 21:52:36.766766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.861 [2024-07-24 21:52:36.766773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.861 [2024-07-24 21:52:36.766779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.861 [2024-07-24 21:52:36.766797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.861 qpair failed and we were unable to recover it. 00:27:28.861 [2024-07-24 21:52:36.776563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.861 [2024-07-24 21:52:36.776700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.861 [2024-07-24 21:52:36.776719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.861 [2024-07-24 21:52:36.776726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.861 [2024-07-24 21:52:36.776731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.861 [2024-07-24 21:52:36.776748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.861 qpair failed and we were unable to recover it. 00:27:28.861 [2024-07-24 21:52:36.786673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.861 [2024-07-24 21:52:36.786812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.861 [2024-07-24 21:52:36.786830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.786838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.786844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.786860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 
00:27:28.862 [2024-07-24 21:52:36.796699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.796829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.796847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.796854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.796860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.796876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 00:27:28.862 [2024-07-24 21:52:36.806724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.806862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.806880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.806887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.806893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.806909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 00:27:28.862 [2024-07-24 21:52:36.816770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.816904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.816923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.816933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.816939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.816955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 
00:27:28.862 [2024-07-24 21:52:36.826800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.826951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.826970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.826977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.826982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.826998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 00:27:28.862 [2024-07-24 21:52:36.836808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.836945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.836964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.836971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.836977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.836993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 00:27:28.862 [2024-07-24 21:52:36.846848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.846983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.847002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.847009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.847015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.847031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 
00:27:28.862 [2024-07-24 21:52:36.856873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.857008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.857027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.857033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.857039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.857062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 00:27:28.862 [2024-07-24 21:52:36.866901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.867046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.867065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.867072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.867078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.867094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 00:27:28.862 [2024-07-24 21:52:36.876940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.877088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.877106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.877113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.877119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.877135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 
00:27:28.862 [2024-07-24 21:52:36.886896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.887035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.887060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.887067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.887073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.887089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 00:27:28.862 [2024-07-24 21:52:36.896998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.897143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.897161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.897168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.897174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.897190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 00:27:28.862 [2024-07-24 21:52:36.906985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.907132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.907150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.907164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.907170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.862 [2024-07-24 21:52:36.907186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.862 qpair failed and we were unable to recover it. 
00:27:28.862 [2024-07-24 21:52:36.917053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.862 [2024-07-24 21:52:36.917193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.862 [2024-07-24 21:52:36.917211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.862 [2024-07-24 21:52:36.917218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.862 [2024-07-24 21:52:36.917224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.863 [2024-07-24 21:52:36.917240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.863 qpair failed and we were unable to recover it. 00:27:28.863 [2024-07-24 21:52:36.927082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.863 [2024-07-24 21:52:36.927220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.863 [2024-07-24 21:52:36.927238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.863 [2024-07-24 21:52:36.927244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.863 [2024-07-24 21:52:36.927250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.863 [2024-07-24 21:52:36.927267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.863 qpair failed and we were unable to recover it. 00:27:28.863 [2024-07-24 21:52:36.937119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.863 [2024-07-24 21:52:36.937257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.863 [2024-07-24 21:52:36.937275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.863 [2024-07-24 21:52:36.937282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.863 [2024-07-24 21:52:36.937288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.863 [2024-07-24 21:52:36.937304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.863 qpair failed and we were unable to recover it. 
00:27:28.863 [2024-07-24 21:52:36.947139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.863 [2024-07-24 21:52:36.947276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.863 [2024-07-24 21:52:36.947294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.863 [2024-07-24 21:52:36.947301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.863 [2024-07-24 21:52:36.947307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.863 [2024-07-24 21:52:36.947324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.863 qpair failed and we were unable to recover it. 00:27:28.863 [2024-07-24 21:52:36.957166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.863 [2024-07-24 21:52:36.957300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.863 [2024-07-24 21:52:36.957320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.863 [2024-07-24 21:52:36.957329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.863 [2024-07-24 21:52:36.957337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.863 [2024-07-24 21:52:36.957355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.863 qpair failed and we were unable to recover it. 00:27:28.863 [2024-07-24 21:52:36.967196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.863 [2024-07-24 21:52:36.967332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.863 [2024-07-24 21:52:36.967351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.863 [2024-07-24 21:52:36.967358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.863 [2024-07-24 21:52:36.967364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:28.863 [2024-07-24 21:52:36.967381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.863 qpair failed and we were unable to recover it. 
00:27:29.125 [2024-07-24 21:52:36.977143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.125 [2024-07-24 21:52:36.977290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.125 [2024-07-24 21:52:36.977308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.125 [2024-07-24 21:52:36.977315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.125 [2024-07-24 21:52:36.977322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.125 [2024-07-24 21:52:36.977338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.125 qpair failed and we were unable to recover it. 00:27:29.125 [2024-07-24 21:52:36.987168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.125 [2024-07-24 21:52:36.987308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.125 [2024-07-24 21:52:36.987326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.125 [2024-07-24 21:52:36.987333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.125 [2024-07-24 21:52:36.987339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.125 [2024-07-24 21:52:36.987356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.125 qpair failed and we were unable to recover it. 00:27:29.125 [2024-07-24 21:52:36.997321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.125 [2024-07-24 21:52:36.997456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.125 [2024-07-24 21:52:36.997475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.125 [2024-07-24 21:52:36.997485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.125 [2024-07-24 21:52:36.997492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.125 [2024-07-24 21:52:36.997508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.125 qpair failed and we were unable to recover it. 
00:27:29.125 [2024-07-24 21:52:37.007230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.125 [2024-07-24 21:52:37.007369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.125 [2024-07-24 21:52:37.007388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.125 [2024-07-24 21:52:37.007394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.125 [2024-07-24 21:52:37.007401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.125 [2024-07-24 21:52:37.007417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.125 qpair failed and we were unable to recover it. 00:27:29.125 [2024-07-24 21:52:37.017351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.017487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.017505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.017512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.017518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.017535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.126 qpair failed and we were unable to recover it. 00:27:29.126 [2024-07-24 21:52:37.027392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.027530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.027549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.027556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.027562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.027579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.126 qpair failed and we were unable to recover it. 
00:27:29.126 [2024-07-24 21:52:37.037396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.037529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.037547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.037554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.037560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.037576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.126 qpair failed and we were unable to recover it. 00:27:29.126 [2024-07-24 21:52:37.047336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.047477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.047495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.047502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.047508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.047525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.126 qpair failed and we were unable to recover it. 00:27:29.126 [2024-07-24 21:52:37.057441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.057581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.057599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.057606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.057613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.057629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.126 qpair failed and we were unable to recover it. 
00:27:29.126 [2024-07-24 21:52:37.067526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.067688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.067706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.067713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.067719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.067736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.126 qpair failed and we were unable to recover it. 00:27:29.126 [2024-07-24 21:52:37.077511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.077645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.077664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.077671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.077676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.077693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.126 qpair failed and we were unable to recover it. 00:27:29.126 [2024-07-24 21:52:37.087528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.087665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.087688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.087695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.087701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.087717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.126 qpair failed and we were unable to recover it. 
00:27:29.126 [2024-07-24 21:52:37.097558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.097698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.097717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.097724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.097730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.097746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.126 qpair failed and we were unable to recover it. 00:27:29.126 [2024-07-24 21:52:37.107603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.107743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.107762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.107768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.107774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.107791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.126 qpair failed and we were unable to recover it. 00:27:29.126 [2024-07-24 21:52:37.117629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.117764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.117783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.117790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.117796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.117812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.126 qpair failed and we were unable to recover it. 
00:27:29.126 [2024-07-24 21:52:37.127687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.127853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.127871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.127878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.127884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.127900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.126 qpair failed and we were unable to recover it. 00:27:29.126 [2024-07-24 21:52:37.137669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.126 [2024-07-24 21:52:37.137810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.126 [2024-07-24 21:52:37.137828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.126 [2024-07-24 21:52:37.137835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.126 [2024-07-24 21:52:37.137841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.126 [2024-07-24 21:52:37.137857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.127 qpair failed and we were unable to recover it. 00:27:29.127 [2024-07-24 21:52:37.147679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.127 [2024-07-24 21:52:37.147816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.127 [2024-07-24 21:52:37.147834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.127 [2024-07-24 21:52:37.147841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.127 [2024-07-24 21:52:37.147847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.127 [2024-07-24 21:52:37.147864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.127 qpair failed and we were unable to recover it. 
00:27:29.127 [2024-07-24 21:52:37.157763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.127 [2024-07-24 21:52:37.157922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.127 [2024-07-24 21:52:37.157941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.127 [2024-07-24 21:52:37.157948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.127 [2024-07-24 21:52:37.157954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.127 [2024-07-24 21:52:37.157970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.127 qpair failed and we were unable to recover it. 00:27:29.127 [2024-07-24 21:52:37.167735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.127 [2024-07-24 21:52:37.167871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.127 [2024-07-24 21:52:37.167890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.127 [2024-07-24 21:52:37.167897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.127 [2024-07-24 21:52:37.167903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.127 [2024-07-24 21:52:37.167919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.127 qpair failed and we were unable to recover it. 00:27:29.127 [2024-07-24 21:52:37.177799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.127 [2024-07-24 21:52:37.177938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.127 [2024-07-24 21:52:37.177960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.127 [2024-07-24 21:52:37.177967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.127 [2024-07-24 21:52:37.177973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.127 [2024-07-24 21:52:37.177989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.127 qpair failed and we were unable to recover it. 
00:27:29.127 [2024-07-24 21:52:37.187819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.127 [2024-07-24 21:52:37.187957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.127 [2024-07-24 21:52:37.187976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.127 [2024-07-24 21:52:37.187983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.127 [2024-07-24 21:52:37.187989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.127 [2024-07-24 21:52:37.188006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.127 qpair failed and we were unable to recover it. 00:27:29.127 [2024-07-24 21:52:37.197840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.127 [2024-07-24 21:52:37.197979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.127 [2024-07-24 21:52:37.197998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.127 [2024-07-24 21:52:37.198004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.127 [2024-07-24 21:52:37.198011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.127 [2024-07-24 21:52:37.198027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.127 qpair failed and we were unable to recover it. 00:27:29.127 [2024-07-24 21:52:37.207885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.127 [2024-07-24 21:52:37.208021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.127 [2024-07-24 21:52:37.208039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.127 [2024-07-24 21:52:37.208051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.127 [2024-07-24 21:52:37.208058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.127 [2024-07-24 21:52:37.208075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.127 qpair failed and we were unable to recover it. 
00:27:29.127 [2024-07-24 21:52:37.217923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.127 [2024-07-24 21:52:37.218064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.127 [2024-07-24 21:52:37.218083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.127 [2024-07-24 21:52:37.218089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.127 [2024-07-24 21:52:37.218095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.127 [2024-07-24 21:52:37.218116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.127 qpair failed and we were unable to recover it. 00:27:29.127 [2024-07-24 21:52:37.227949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.127 [2024-07-24 21:52:37.228092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.127 [2024-07-24 21:52:37.228110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.127 [2024-07-24 21:52:37.228117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.127 [2024-07-24 21:52:37.228123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.127 [2024-07-24 21:52:37.228140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.127 qpair failed and we were unable to recover it. 00:27:29.127 [2024-07-24 21:52:37.237986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.127 [2024-07-24 21:52:37.238136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.127 [2024-07-24 21:52:37.238154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.127 [2024-07-24 21:52:37.238161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.127 [2024-07-24 21:52:37.238167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.127 [2024-07-24 21:52:37.238184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.127 qpair failed and we were unable to recover it. 
00:27:29.390 [2024-07-24 21:52:37.247988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.390 [2024-07-24 21:52:37.248152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.390 [2024-07-24 21:52:37.248171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.390 [2024-07-24 21:52:37.248178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.390 [2024-07-24 21:52:37.248184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.390 [2024-07-24 21:52:37.248201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-07-24 21:52:37.258260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.390 [2024-07-24 21:52:37.258398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.390 [2024-07-24 21:52:37.258416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.390 [2024-07-24 21:52:37.258423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.390 [2024-07-24 21:52:37.258429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.390 [2024-07-24 21:52:37.258446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-07-24 21:52:37.268066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.390 [2024-07-24 21:52:37.268207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.390 [2024-07-24 21:52:37.268228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.390 [2024-07-24 21:52:37.268235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.390 [2024-07-24 21:52:37.268241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.390 [2024-07-24 21:52:37.268258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.390 qpair failed and we were unable to recover it. 
00:27:29.390 [2024-07-24 21:52:37.278087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.390 [2024-07-24 21:52:37.278224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.390 [2024-07-24 21:52:37.278242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.390 [2024-07-24 21:52:37.278249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.390 [2024-07-24 21:52:37.278255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.390 [2024-07-24 21:52:37.278271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-07-24 21:52:37.288172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.390 [2024-07-24 21:52:37.288305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.390 [2024-07-24 21:52:37.288323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.390 [2024-07-24 21:52:37.288330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.390 [2024-07-24 21:52:37.288336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.390 [2024-07-24 21:52:37.288353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-07-24 21:52:37.298206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.390 [2024-07-24 21:52:37.298343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.390 [2024-07-24 21:52:37.298362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.390 [2024-07-24 21:52:37.298368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.390 [2024-07-24 21:52:37.298374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.390 [2024-07-24 21:52:37.298391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.390 qpair failed and we were unable to recover it. 
00:27:29.390 [2024-07-24 21:52:37.308179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.390 [2024-07-24 21:52:37.308321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.390 [2024-07-24 21:52:37.308338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.390 [2024-07-24 21:52:37.308345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.390 [2024-07-24 21:52:37.308351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.390 [2024-07-24 21:52:37.308371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-07-24 21:52:37.318228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.390 [2024-07-24 21:52:37.318393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.390 [2024-07-24 21:52:37.318411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.390 [2024-07-24 21:52:37.318418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.390 [2024-07-24 21:52:37.318424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.390 [2024-07-24 21:52:37.318440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.390 qpair failed and we were unable to recover it. 00:27:29.390 [2024-07-24 21:52:37.328245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.390 [2024-07-24 21:52:37.328383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.390 [2024-07-24 21:52:37.328402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.328408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.328414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.328431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 
00:27:29.391 [2024-07-24 21:52:37.338274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.338411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.338430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.338437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.338443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.338459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-07-24 21:52:37.348307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.348452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.348471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.348478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.348484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.348500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-07-24 21:52:37.358325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.358459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.358481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.358488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.358494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.358510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 
00:27:29.391 [2024-07-24 21:52:37.368332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.368471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.368489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.368496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.368502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.368518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-07-24 21:52:37.378395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.378530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.378550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.378557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.378563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.378579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-07-24 21:52:37.388410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.388554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.388572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.388579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.388585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.388601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 
00:27:29.391 [2024-07-24 21:52:37.398445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.398583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.398602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.398609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.398615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.398636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-07-24 21:52:37.408463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.408600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.408618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.408625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.408631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.408647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-07-24 21:52:37.418499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.418636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.418654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.418661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.418667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.418683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 
00:27:29.391 [2024-07-24 21:52:37.428534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.428666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.428685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.428692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.428697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.428714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-07-24 21:52:37.438576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.438708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.438726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.438733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.438739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.438755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 00:27:29.391 [2024-07-24 21:52:37.448589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.448726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.448748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.448755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.391 [2024-07-24 21:52:37.448761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.391 [2024-07-24 21:52:37.448777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.391 qpair failed and we were unable to recover it. 
00:27:29.391 [2024-07-24 21:52:37.458647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.391 [2024-07-24 21:52:37.458782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.391 [2024-07-24 21:52:37.458801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.391 [2024-07-24 21:52:37.458808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.392 [2024-07-24 21:52:37.458814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.392 [2024-07-24 21:52:37.458830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-07-24 21:52:37.468650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.392 [2024-07-24 21:52:37.468787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.392 [2024-07-24 21:52:37.468806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.392 [2024-07-24 21:52:37.468814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.392 [2024-07-24 21:52:37.468820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.392 [2024-07-24 21:52:37.468837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-07-24 21:52:37.478705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.392 [2024-07-24 21:52:37.478836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.392 [2024-07-24 21:52:37.478855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.392 [2024-07-24 21:52:37.478862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.392 [2024-07-24 21:52:37.478868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.392 [2024-07-24 21:52:37.478884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.392 qpair failed and we were unable to recover it. 
00:27:29.392 [2024-07-24 21:52:37.488724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.392 [2024-07-24 21:52:37.488903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.392 [2024-07-24 21:52:37.488921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.392 [2024-07-24 21:52:37.488928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.392 [2024-07-24 21:52:37.488938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.392 [2024-07-24 21:52:37.488954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.392 [2024-07-24 21:52:37.498666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.392 [2024-07-24 21:52:37.498808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.392 [2024-07-24 21:52:37.498827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.392 [2024-07-24 21:52:37.498834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.392 [2024-07-24 21:52:37.498840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.392 [2024-07-24 21:52:37.498856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.392 qpair failed and we were unable to recover it. 00:27:29.654 [2024-07-24 21:52:37.508766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.654 [2024-07-24 21:52:37.508909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.654 [2024-07-24 21:52:37.508928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.654 [2024-07-24 21:52:37.508935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.654 [2024-07-24 21:52:37.508941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.654 [2024-07-24 21:52:37.508957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.654 qpair failed and we were unable to recover it. 
00:27:29.654 [2024-07-24 21:52:37.518793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.654 [2024-07-24 21:52:37.518930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.654 [2024-07-24 21:52:37.518949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.654 [2024-07-24 21:52:37.518955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.654 [2024-07-24 21:52:37.518961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.654 [2024-07-24 21:52:37.518977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-07-24 21:52:37.528824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.654 [2024-07-24 21:52:37.528962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.654 [2024-07-24 21:52:37.528980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.654 [2024-07-24 21:52:37.528986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.654 [2024-07-24 21:52:37.528993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.654 [2024-07-24 21:52:37.529009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-07-24 21:52:37.538830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.654 [2024-07-24 21:52:37.538971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.654 [2024-07-24 21:52:37.538989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.654 [2024-07-24 21:52:37.538996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.654 [2024-07-24 21:52:37.539002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.654 [2024-07-24 21:52:37.539019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.654 qpair failed and we were unable to recover it. 
00:27:29.654 [2024-07-24 21:52:37.548881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.549021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.549039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.549052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.549059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.549076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 00:27:29.655 [2024-07-24 21:52:37.558906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.559041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.559066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.559073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.559079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.559095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 00:27:29.655 [2024-07-24 21:52:37.568940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.569080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.569099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.569105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.569112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.569128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 
00:27:29.655 [2024-07-24 21:52:37.578984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.579128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.579146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.579153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.579163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.579180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 00:27:29.655 [2024-07-24 21:52:37.589003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.589147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.589165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.589172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.589178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.589195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 00:27:29.655 [2024-07-24 21:52:37.599028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.599166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.599185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.599191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.599197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.599214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 
00:27:29.655 [2024-07-24 21:52:37.609063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.609202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.609220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.609227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.609233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.609250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 00:27:29.655 [2024-07-24 21:52:37.619089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.619234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.619252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.619259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.619265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.619281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 00:27:29.655 [2024-07-24 21:52:37.629092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.629233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.629250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.629257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.629262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.629279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 
00:27:29.655 [2024-07-24 21:52:37.639150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.639288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.639306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.639313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.639319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.639336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 00:27:29.655 [2024-07-24 21:52:37.649171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.649304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.649322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.649329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.649335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.649351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 00:27:29.655 [2024-07-24 21:52:37.659214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.659353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.659372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.659379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.659385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.659401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 
00:27:29.655 [2024-07-24 21:52:37.669234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.669376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.655 [2024-07-24 21:52:37.669394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.655 [2024-07-24 21:52:37.669401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.655 [2024-07-24 21:52:37.669414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.655 [2024-07-24 21:52:37.669430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.655 qpair failed and we were unable to recover it. 00:27:29.655 [2024-07-24 21:52:37.679287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.655 [2024-07-24 21:52:37.679424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.656 [2024-07-24 21:52:37.679442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.656 [2024-07-24 21:52:37.679449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.656 [2024-07-24 21:52:37.679455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.656 [2024-07-24 21:52:37.679471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.656 qpair failed and we were unable to recover it. 00:27:29.656 [2024-07-24 21:52:37.689294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.656 [2024-07-24 21:52:37.689429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.656 [2024-07-24 21:52:37.689447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.656 [2024-07-24 21:52:37.689454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.656 [2024-07-24 21:52:37.689460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.656 [2024-07-24 21:52:37.689476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.656 qpair failed and we were unable to recover it. 
00:27:29.656 [2024-07-24 21:52:37.699522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.656 [2024-07-24 21:52:37.699667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.656 [2024-07-24 21:52:37.699686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.656 [2024-07-24 21:52:37.699693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.656 [2024-07-24 21:52:37.699699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.656 [2024-07-24 21:52:37.699715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.656 qpair failed and we were unable to recover it. 00:27:29.656 [2024-07-24 21:52:37.709367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.656 [2024-07-24 21:52:37.709507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.656 [2024-07-24 21:52:37.709526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.656 [2024-07-24 21:52:37.709532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.656 [2024-07-24 21:52:37.709538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.656 [2024-07-24 21:52:37.709555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.656 qpair failed and we were unable to recover it. 00:27:29.656 [2024-07-24 21:52:37.719380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.656 [2024-07-24 21:52:37.719515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.656 [2024-07-24 21:52:37.719535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.656 [2024-07-24 21:52:37.719543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.656 [2024-07-24 21:52:37.719550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.656 [2024-07-24 21:52:37.719568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.656 qpair failed and we were unable to recover it. 
00:27:29.656 [2024-07-24 21:52:37.729354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.656 [2024-07-24 21:52:37.729492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.656 [2024-07-24 21:52:37.729510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.656 [2024-07-24 21:52:37.729517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.656 [2024-07-24 21:52:37.729523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.656 [2024-07-24 21:52:37.729540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.656 qpair failed and we were unable to recover it. 00:27:29.656 [2024-07-24 21:52:37.739450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.656 [2024-07-24 21:52:37.739778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.656 [2024-07-24 21:52:37.739797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.656 [2024-07-24 21:52:37.739804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.656 [2024-07-24 21:52:37.739811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.656 [2024-07-24 21:52:37.739827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.656 qpair failed and we were unable to recover it. 00:27:29.656 [2024-07-24 21:52:37.749499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.656 [2024-07-24 21:52:37.749689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.656 [2024-07-24 21:52:37.749708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.656 [2024-07-24 21:52:37.749715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.656 [2024-07-24 21:52:37.749721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.656 [2024-07-24 21:52:37.749737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.656 qpair failed and we were unable to recover it. 
00:27:29.656 [2024-07-24 21:52:37.759485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.656 [2024-07-24 21:52:37.759618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.656 [2024-07-24 21:52:37.759636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.656 [2024-07-24 21:52:37.759647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.656 [2024-07-24 21:52:37.759653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.656 [2024-07-24 21:52:37.759668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.656 qpair failed and we were unable to recover it. 00:27:29.919 [2024-07-24 21:52:37.769513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.769657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.769675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.919 [2024-07-24 21:52:37.769682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.919 [2024-07-24 21:52:37.769688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.919 [2024-07-24 21:52:37.769705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.919 qpair failed and we were unable to recover it. 00:27:29.919 [2024-07-24 21:52:37.779496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.779825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.779843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.919 [2024-07-24 21:52:37.779850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.919 [2024-07-24 21:52:37.779856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.919 [2024-07-24 21:52:37.779872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.919 qpair failed and we were unable to recover it. 
00:27:29.919 [2024-07-24 21:52:37.789621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.789762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.789781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.919 [2024-07-24 21:52:37.789788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.919 [2024-07-24 21:52:37.789794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.919 [2024-07-24 21:52:37.789811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.919 qpair failed and we were unable to recover it. 00:27:29.919 [2024-07-24 21:52:37.799533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.799667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.799686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.919 [2024-07-24 21:52:37.799693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.919 [2024-07-24 21:52:37.799700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.919 [2024-07-24 21:52:37.799716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.919 qpair failed and we were unable to recover it. 00:27:29.919 [2024-07-24 21:52:37.809575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.809721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.809740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.919 [2024-07-24 21:52:37.809747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.919 [2024-07-24 21:52:37.809753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.919 [2024-07-24 21:52:37.809770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.919 qpair failed and we were unable to recover it. 
00:27:29.919 [2024-07-24 21:52:37.819607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.819745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.819763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.919 [2024-07-24 21:52:37.819770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.919 [2024-07-24 21:52:37.819776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.919 [2024-07-24 21:52:37.819792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.919 qpair failed and we were unable to recover it. 00:27:29.919 [2024-07-24 21:52:37.829650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.829790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.829808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.919 [2024-07-24 21:52:37.829815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.919 [2024-07-24 21:52:37.829821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.919 [2024-07-24 21:52:37.829838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.919 qpair failed and we were unable to recover it. 00:27:29.919 [2024-07-24 21:52:37.839763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.839899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.839918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.919 [2024-07-24 21:52:37.839925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.919 [2024-07-24 21:52:37.839931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.919 [2024-07-24 21:52:37.839948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.919 qpair failed and we were unable to recover it. 
00:27:29.919 [2024-07-24 21:52:37.849791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.849929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.849947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.919 [2024-07-24 21:52:37.849958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.919 [2024-07-24 21:52:37.849964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.919 [2024-07-24 21:52:37.849981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.919 qpair failed and we were unable to recover it. 00:27:29.919 [2024-07-24 21:52:37.859799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.859936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.859954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.919 [2024-07-24 21:52:37.859961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.919 [2024-07-24 21:52:37.859967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.919 [2024-07-24 21:52:37.859984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.919 qpair failed and we were unable to recover it. 00:27:29.919 [2024-07-24 21:52:37.869822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.869977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.869995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.919 [2024-07-24 21:52:37.870002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.919 [2024-07-24 21:52:37.870008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.919 [2024-07-24 21:52:37.870024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.919 qpair failed and we were unable to recover it. 
00:27:29.919 [2024-07-24 21:52:37.879853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.879989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.880008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.919 [2024-07-24 21:52:37.880014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.919 [2024-07-24 21:52:37.880026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.919 [2024-07-24 21:52:37.880048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.919 qpair failed and we were unable to recover it. 00:27:29.919 [2024-07-24 21:52:37.889879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.919 [2024-07-24 21:52:37.890015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.919 [2024-07-24 21:52:37.890033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:37.890040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:37.890053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:37.890069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 00:27:29.920 [2024-07-24 21:52:37.899923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:37.900068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:37.900086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:37.900093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:37.900099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:37.900116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 
00:27:29.920 [2024-07-24 21:52:37.909852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:37.910000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:37.910018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:37.910025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:37.910031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:37.910055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 00:27:29.920 [2024-07-24 21:52:37.919942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:37.920088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:37.920106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:37.920114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:37.920120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:37.920136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 00:27:29.920 [2024-07-24 21:52:37.929979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:37.930127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:37.930145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:37.930152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:37.930158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:37.930175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 
00:27:29.920 [2024-07-24 21:52:37.940024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:37.940170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:37.940189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:37.940199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:37.940205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:37.940222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 00:27:29.920 [2024-07-24 21:52:37.950052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:37.950188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:37.950207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:37.950214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:37.950220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:37.950237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 00:27:29.920 [2024-07-24 21:52:37.960086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:37.960218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:37.960237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:37.960245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:37.960251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:37.960267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 
00:27:29.920 [2024-07-24 21:52:37.970101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:37.970241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:37.970259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:37.970266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:37.970273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:37.970290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 00:27:29.920 [2024-07-24 21:52:37.980178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:37.980322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:37.980340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:37.980347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:37.980353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:37.980369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 00:27:29.920 [2024-07-24 21:52:37.990167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:37.990311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:37.990330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:37.990337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:37.990343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:37.990359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 
00:27:29.920 [2024-07-24 21:52:38.000176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:38.000313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:38.000332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:38.000339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:38.000345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:38.000361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 00:27:29.920 [2024-07-24 21:52:38.010204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:38.010342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:38.010361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:38.010368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.920 [2024-07-24 21:52:38.010374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.920 [2024-07-24 21:52:38.010391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.920 qpair failed and we were unable to recover it. 00:27:29.920 [2024-07-24 21:52:38.020175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.920 [2024-07-24 21:52:38.020314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.920 [2024-07-24 21:52:38.020333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.920 [2024-07-24 21:52:38.020340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.921 [2024-07-24 21:52:38.020346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.921 [2024-07-24 21:52:38.020362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.921 qpair failed and we were unable to recover it. 
00:27:29.921 [2024-07-24 21:52:38.030184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.921 [2024-07-24 21:52:38.030324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.921 [2024-07-24 21:52:38.030346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.921 [2024-07-24 21:52:38.030353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.921 [2024-07-24 21:52:38.030359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:29.921 [2024-07-24 21:52:38.030375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.921 qpair failed and we were unable to recover it. 00:27:30.182 [2024-07-24 21:52:38.040269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.182 [2024-07-24 21:52:38.040407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.182 [2024-07-24 21:52:38.040425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.182 [2024-07-24 21:52:38.040432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.182 [2024-07-24 21:52:38.040438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.182 [2024-07-24 21:52:38.040454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.182 qpair failed and we were unable to recover it. 00:27:30.182 [2024-07-24 21:52:38.050262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.182 [2024-07-24 21:52:38.050402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.182 [2024-07-24 21:52:38.050421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.182 [2024-07-24 21:52:38.050428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.182 [2024-07-24 21:52:38.050434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.182 [2024-07-24 21:52:38.050450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.182 qpair failed and we were unable to recover it. 
00:27:30.182 [2024-07-24 21:52:38.060341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.182 [2024-07-24 21:52:38.060480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.182 [2024-07-24 21:52:38.060498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.182 [2024-07-24 21:52:38.060505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.182 [2024-07-24 21:52:38.060511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.182 [2024-07-24 21:52:38.060528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.182 qpair failed and we were unable to recover it. 00:27:30.182 [2024-07-24 21:52:38.070395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.182 [2024-07-24 21:52:38.070536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.182 [2024-07-24 21:52:38.070554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.182 [2024-07-24 21:52:38.070561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.182 [2024-07-24 21:52:38.070567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.182 [2024-07-24 21:52:38.070584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.182 qpair failed and we were unable to recover it. 00:27:30.183 [2024-07-24 21:52:38.080416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.080550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.080569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.080576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.080582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.080597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 
00:27:30.183 [2024-07-24 21:52:38.090365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.090494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.090512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.090519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.090525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.090542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 00:27:30.183 [2024-07-24 21:52:38.100475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.100611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.100630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.100637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.100643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.100659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 00:27:30.183 [2024-07-24 21:52:38.110463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.110596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.110614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.110621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.110627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.110643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 
00:27:30.183 [2024-07-24 21:52:38.120448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.120584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.120606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.120613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.120619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.120636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 00:27:30.183 [2024-07-24 21:52:38.130552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.130685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.130703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.130710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.130716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.130732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 00:27:30.183 [2024-07-24 21:52:38.140608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.140742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.140762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.140769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.140774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.140791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 
00:27:30.183 [2024-07-24 21:52:38.150586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.150908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.150926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.150933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.150939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.150954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 00:27:30.183 [2024-07-24 21:52:38.160553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.160691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.160709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.160716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.160722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.160741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 00:27:30.183 [2024-07-24 21:52:38.170624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.170762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.170780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.170787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.170793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.170809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 
00:27:30.183 [2024-07-24 21:52:38.180718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.180854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.180872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.180879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.180885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.180901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 00:27:30.183 [2024-07-24 21:52:38.190713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.190852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.190869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.190876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.190882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.190898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 00:27:30.183 [2024-07-24 21:52:38.200744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.200877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.183 [2024-07-24 21:52:38.200896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.183 [2024-07-24 21:52:38.200902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.183 [2024-07-24 21:52:38.200908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.183 [2024-07-24 21:52:38.200924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.183 qpair failed and we were unable to recover it. 
00:27:30.183 [2024-07-24 21:52:38.210778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.183 [2024-07-24 21:52:38.210916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.184 [2024-07-24 21:52:38.210938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.184 [2024-07-24 21:52:38.210945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.184 [2024-07-24 21:52:38.210952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.184 [2024-07-24 21:52:38.210968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.184 qpair failed and we were unable to recover it. 00:27:30.184 [2024-07-24 21:52:38.220797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.184 [2024-07-24 21:52:38.220934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.184 [2024-07-24 21:52:38.220952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.184 [2024-07-24 21:52:38.220959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.184 [2024-07-24 21:52:38.220965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.184 [2024-07-24 21:52:38.220981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.184 qpair failed and we were unable to recover it. 00:27:30.184 [2024-07-24 21:52:38.230824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.184 [2024-07-24 21:52:38.231000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.184 [2024-07-24 21:52:38.231018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.184 [2024-07-24 21:52:38.231025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.184 [2024-07-24 21:52:38.231031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.184 [2024-07-24 21:52:38.231053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.184 qpair failed and we were unable to recover it. 
00:27:30.184 [2024-07-24 21:52:38.241075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.184 [2024-07-24 21:52:38.241394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.184 [2024-07-24 21:52:38.241412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.184 [2024-07-24 21:52:38.241419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.184 [2024-07-24 21:52:38.241425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.184 [2024-07-24 21:52:38.241441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.184 qpair failed and we were unable to recover it. 00:27:30.184 [2024-07-24 21:52:38.250888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.184 [2024-07-24 21:52:38.251020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.184 [2024-07-24 21:52:38.251038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.184 [2024-07-24 21:52:38.251052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.184 [2024-07-24 21:52:38.251059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.184 [2024-07-24 21:52:38.251079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.184 qpair failed and we were unable to recover it. 00:27:30.184 [2024-07-24 21:52:38.260927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.184 [2024-07-24 21:52:38.261067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.184 [2024-07-24 21:52:38.261085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.184 [2024-07-24 21:52:38.261092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.184 [2024-07-24 21:52:38.261098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.184 [2024-07-24 21:52:38.261115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.184 qpair failed and we were unable to recover it. 
00:27:30.184 [2024-07-24 21:52:38.270947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.184 [2024-07-24 21:52:38.271093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.184 [2024-07-24 21:52:38.271111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.184 [2024-07-24 21:52:38.271118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.184 [2024-07-24 21:52:38.271124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.184 [2024-07-24 21:52:38.271141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.184 qpair failed and we were unable to recover it. 00:27:30.184 [2024-07-24 21:52:38.280984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.184 [2024-07-24 21:52:38.281129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.184 [2024-07-24 21:52:38.281147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.184 [2024-07-24 21:52:38.281154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.184 [2024-07-24 21:52:38.281160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.184 [2024-07-24 21:52:38.281177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.184 qpair failed and we were unable to recover it. 00:27:30.184 [2024-07-24 21:52:38.291003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.184 [2024-07-24 21:52:38.291148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.184 [2024-07-24 21:52:38.291166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.184 [2024-07-24 21:52:38.291173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.184 [2024-07-24 21:52:38.291179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.184 [2024-07-24 21:52:38.291195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.184 qpair failed and we were unable to recover it. 
00:27:30.445 [2024-07-24 21:52:38.301039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.445 [2024-07-24 21:52:38.301181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.445 [2024-07-24 21:52:38.301203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.445 [2024-07-24 21:52:38.301210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.445 [2024-07-24 21:52:38.301216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.445 [2024-07-24 21:52:38.301232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.445 qpair failed and we were unable to recover it. 00:27:30.445 [2024-07-24 21:52:38.311061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.445 [2024-07-24 21:52:38.311201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.445 [2024-07-24 21:52:38.311219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.445 [2024-07-24 21:52:38.311226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.445 [2024-07-24 21:52:38.311232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.445 [2024-07-24 21:52:38.311248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.445 qpair failed and we were unable to recover it. 00:27:30.445 [2024-07-24 21:52:38.321103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.445 [2024-07-24 21:52:38.321239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.445 [2024-07-24 21:52:38.321258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.445 [2024-07-24 21:52:38.321265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.445 [2024-07-24 21:52:38.321270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.445 [2024-07-24 21:52:38.321287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.445 qpair failed and we were unable to recover it. 
00:27:30.445 [2024-07-24 21:52:38.331132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.445 [2024-07-24 21:52:38.331270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.445 [2024-07-24 21:52:38.331288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.445 [2024-07-24 21:52:38.331295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.445 [2024-07-24 21:52:38.331301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.445 [2024-07-24 21:52:38.331318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.445 qpair failed and we were unable to recover it. 00:27:30.445 [2024-07-24 21:52:38.341173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.445 [2024-07-24 21:52:38.341311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.445 [2024-07-24 21:52:38.341330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.445 [2024-07-24 21:52:38.341336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.445 [2024-07-24 21:52:38.341342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.445 [2024-07-24 21:52:38.341365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.445 qpair failed and we were unable to recover it. 00:27:30.445 [2024-07-24 21:52:38.351178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.445 [2024-07-24 21:52:38.351317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.445 [2024-07-24 21:52:38.351336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.445 [2024-07-24 21:52:38.351343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.445 [2024-07-24 21:52:38.351349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.445 [2024-07-24 21:52:38.351365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.445 qpair failed and we were unable to recover it. 
00:27:30.445 [2024-07-24 21:52:38.361214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.445 [2024-07-24 21:52:38.361353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.445 [2024-07-24 21:52:38.361371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.445 [2024-07-24 21:52:38.361378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.445 [2024-07-24 21:52:38.361384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.445 [2024-07-24 21:52:38.361400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.445 qpair failed and we were unable to recover it. 00:27:30.445 [2024-07-24 21:52:38.371245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.445 [2024-07-24 21:52:38.371384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.445 [2024-07-24 21:52:38.371402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.445 [2024-07-24 21:52:38.371409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.445 [2024-07-24 21:52:38.371415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.445 [2024-07-24 21:52:38.371431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.445 qpair failed and we were unable to recover it. 00:27:30.445 [2024-07-24 21:52:38.381270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.445 [2024-07-24 21:52:38.381406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.445 [2024-07-24 21:52:38.381424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.445 [2024-07-24 21:52:38.381431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.445 [2024-07-24 21:52:38.381437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.445 [2024-07-24 21:52:38.381453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.445 qpair failed and we were unable to recover it. 
00:27:30.445 [2024-07-24 21:52:38.391307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.445 [2024-07-24 21:52:38.391446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.445 [2024-07-24 21:52:38.391468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.445 [2024-07-24 21:52:38.391475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.445 [2024-07-24 21:52:38.391481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.445 [2024-07-24 21:52:38.391497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.445 qpair failed and we were unable to recover it. 00:27:30.445 [2024-07-24 21:52:38.401512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.445 [2024-07-24 21:52:38.401646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.445 [2024-07-24 21:52:38.401665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.445 [2024-07-24 21:52:38.401671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.445 [2024-07-24 21:52:38.401677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.445 [2024-07-24 21:52:38.401694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.445 qpair failed and we were unable to recover it. 00:27:30.445 [2024-07-24 21:52:38.411361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.445 [2024-07-24 21:52:38.411497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.445 [2024-07-24 21:52:38.411516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.445 [2024-07-24 21:52:38.411523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.445 [2024-07-24 21:52:38.411529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.411546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 
00:27:30.446 [2024-07-24 21:52:38.421401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.421537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.421555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.421562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.421568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.421585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 00:27:30.446 [2024-07-24 21:52:38.431417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.431553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.431571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.431578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.431588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.431605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 00:27:30.446 [2024-07-24 21:52:38.441436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.441572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.441591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.441598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.441604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.441620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 
00:27:30.446 [2024-07-24 21:52:38.451467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.451605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.451624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.451630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.451636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.451652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 00:27:30.446 [2024-07-24 21:52:38.461528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.461665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.461684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.461691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.461697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.461713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 00:27:30.446 [2024-07-24 21:52:38.471447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.471585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.471605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.471612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.471618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.471635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 
00:27:30.446 [2024-07-24 21:52:38.481551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.481689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.481708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.481715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.481721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.481737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 00:27:30.446 [2024-07-24 21:52:38.491631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.491795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.491814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.491820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.491827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.491843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 00:27:30.446 [2024-07-24 21:52:38.501549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.501725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.501744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.501750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.501758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.501774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 
00:27:30.446 [2024-07-24 21:52:38.511652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.511784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.511802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.511809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.511816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.511832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 00:27:30.446 [2024-07-24 21:52:38.521597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.521744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.521762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.521769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.521778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.521794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 00:27:30.446 [2024-07-24 21:52:38.531704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.531840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.531859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.531866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.531871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.531888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 
00:27:30.446 [2024-07-24 21:52:38.541740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.446 [2024-07-24 21:52:38.541876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.446 [2024-07-24 21:52:38.541894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.446 [2024-07-24 21:52:38.541902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.446 [2024-07-24 21:52:38.541907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.446 [2024-07-24 21:52:38.541924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.446 qpair failed and we were unable to recover it. 00:27:30.446 [2024-07-24 21:52:38.551760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.447 [2024-07-24 21:52:38.551901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.447 [2024-07-24 21:52:38.551920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.447 [2024-07-24 21:52:38.551927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.447 [2024-07-24 21:52:38.551933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.447 [2024-07-24 21:52:38.551949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.447 qpair failed and we were unable to recover it. 00:27:30.708 [2024-07-24 21:52:38.561789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.708 [2024-07-24 21:52:38.561927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.708 [2024-07-24 21:52:38.561946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.708 [2024-07-24 21:52:38.561953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.708 [2024-07-24 21:52:38.561960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.708 [2024-07-24 21:52:38.561976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.708 qpair failed and we were unable to recover it. 
00:27:30.708 [2024-07-24 21:52:38.571826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.708 [2024-07-24 21:52:38.571974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.708 [2024-07-24 21:52:38.571993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.708 [2024-07-24 21:52:38.572000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.708 [2024-07-24 21:52:38.572006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.708 [2024-07-24 21:52:38.572023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.708 qpair failed and we were unable to recover it. 00:27:30.708 [2024-07-24 21:52:38.581852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.581991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.582009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.582016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.582022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.582038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 00:27:30.709 [2024-07-24 21:52:38.591905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.592078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.592097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.592104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.592111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.592128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 
00:27:30.709 [2024-07-24 21:52:38.601907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.602041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.602065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.602072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.602078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.602095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 00:27:30.709 [2024-07-24 21:52:38.611929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.612075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.612093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.612101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.612110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.612127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 00:27:30.709 [2024-07-24 21:52:38.621976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.622117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.622136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.622143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.622148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.622165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 
00:27:30.709 [2024-07-24 21:52:38.631973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.632118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.632136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.632143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.632149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.632166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 00:27:30.709 [2024-07-24 21:52:38.642024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.642165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.642184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.642191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.642197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.642214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 00:27:30.709 [2024-07-24 21:52:38.651979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.652118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.652137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.652144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.652150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.652166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 
00:27:30.709 [2024-07-24 21:52:38.662094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.662233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.662251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.662259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.662264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.662281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 00:27:30.709 [2024-07-24 21:52:38.672111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.672253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.672272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.672278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.672285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.672301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 00:27:30.709 [2024-07-24 21:52:38.682149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.682285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.682304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.682311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.682317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.682333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 
00:27:30.709 [2024-07-24 21:52:38.692173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.692309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.692327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.692334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.692340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.692357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 00:27:30.709 [2024-07-24 21:52:38.702208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.702347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.702366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.709 [2024-07-24 21:52:38.702376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.709 [2024-07-24 21:52:38.702382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.709 [2024-07-24 21:52:38.702398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.709 qpair failed and we were unable to recover it. 00:27:30.709 [2024-07-24 21:52:38.712253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.709 [2024-07-24 21:52:38.712390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.709 [2024-07-24 21:52:38.712408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.710 [2024-07-24 21:52:38.712414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.710 [2024-07-24 21:52:38.712420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.710 [2024-07-24 21:52:38.712436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.710 qpair failed and we were unable to recover it. 
00:27:30.710 [2024-07-24 21:52:38.722265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.710 [2024-07-24 21:52:38.722404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.710 [2024-07-24 21:52:38.722422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.710 [2024-07-24 21:52:38.722429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.710 [2024-07-24 21:52:38.722435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.710 [2024-07-24 21:52:38.722451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.710 qpair failed and we were unable to recover it. 00:27:30.710 [2024-07-24 21:52:38.732302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.710 [2024-07-24 21:52:38.732435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.710 [2024-07-24 21:52:38.732454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.710 [2024-07-24 21:52:38.732460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.710 [2024-07-24 21:52:38.732466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.710 [2024-07-24 21:52:38.732483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.710 qpair failed and we were unable to recover it. 00:27:30.710 [2024-07-24 21:52:38.742320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.710 [2024-07-24 21:52:38.742459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.710 [2024-07-24 21:52:38.742478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.710 [2024-07-24 21:52:38.742485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.710 [2024-07-24 21:52:38.742491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.710 [2024-07-24 21:52:38.742507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.710 qpair failed and we were unable to recover it. 
00:27:30.710 [2024-07-24 21:52:38.752348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.710 [2024-07-24 21:52:38.752482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.710 [2024-07-24 21:52:38.752500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.710 [2024-07-24 21:52:38.752507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.710 [2024-07-24 21:52:38.752513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa73f30 00:27:30.710 [2024-07-24 21:52:38.752530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.710 qpair failed and we were unable to recover it. 00:27:30.710 [2024-07-24 21:52:38.762392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.710 [2024-07-24 21:52:38.762574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.710 [2024-07-24 21:52:38.762604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.710 [2024-07-24 21:52:38.762616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.710 [2024-07-24 21:52:38.762625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e40000b90 00:27:30.710 [2024-07-24 21:52:38.762651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:30.710 qpair failed and we were unable to recover it. 00:27:30.710 [2024-07-24 21:52:38.772398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.710 [2024-07-24 21:52:38.772540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.710 [2024-07-24 21:52:38.772560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.710 [2024-07-24 21:52:38.772568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.710 [2024-07-24 21:52:38.772575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e40000b90 00:27:30.710 [2024-07-24 21:52:38.772593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:30.710 qpair failed and we were unable to recover it. 
00:27:30.710 [2024-07-24 21:52:38.772879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81ff0 is same with the state(5) to be set 00:27:30.710 [2024-07-24 21:52:38.782463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.710 [2024-07-24 21:52:38.782799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.710 [2024-07-24 21:52:38.782828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.710 [2024-07-24 21:52:38.782839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.710 [2024-07-24 21:52:38.782848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:30.710 [2024-07-24 21:52:38.782873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:30.710 qpair failed and we were unable to recover it. 00:27:30.710 [2024-07-24 21:52:38.792462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.710 [2024-07-24 21:52:38.792609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.710 [2024-07-24 21:52:38.792633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.710 [2024-07-24 21:52:38.792641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.710 [2024-07-24 21:52:38.792647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e50000b90 00:27:30.710 [2024-07-24 21:52:38.792665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:30.710 qpair failed and we were unable to recover it. 00:27:30.710 [2024-07-24 21:52:38.802484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.710 [2024-07-24 21:52:38.802622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.710 [2024-07-24 21:52:38.802645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.710 [2024-07-24 21:52:38.802654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.710 [2024-07-24 21:52:38.802660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e48000b90 00:27:30.710 [2024-07-24 21:52:38.802679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.710 qpair failed and we were unable to recover it. 
00:27:30.710 [2024-07-24 21:52:38.812511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.710 [2024-07-24 21:52:38.812649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.710 [2024-07-24 21:52:38.812669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.710 [2024-07-24 21:52:38.812676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.710 [2024-07-24 21:52:38.812683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4e48000b90 00:27:30.710 [2024-07-24 21:52:38.812699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.710 qpair failed and we were unable to recover it. 00:27:30.710 [2024-07-24 21:52:38.812967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa81ff0 (9): Bad file descriptor 00:27:30.710 Initializing NVMe Controllers 00:27:30.710 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:30.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:30.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:30.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:30.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:30.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:30.710 Initialization complete. Launching workers. 
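The repeated "Connect command failed, rc -5 ... sct 1, sc 130" and "qpair failed and we were unable to recover it" entries above are the fault path this disconnect test is exercising: the target refuses the I/O-queue CONNECT ("Unknown controller ID 0x1") while the host keeps retrying, until the run is wrapped up below. A minimal sketch of the same kind of CONNECT attempt from an initiator with nvme-cli, reusing only the address, port, and subsystem NQN printed in the log (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1); the command is illustrative and not part of the captured run:

# Illustrative only -- assumes nvme-cli on the initiator and the SPDK TCP
# target from this run listening on 10.0.0.2:4420.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
  || echo 'CONNECT rejected (expected while the controller is being torn down)'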
00:27:30.710 Starting thread on core 1 00:27:30.710 Starting thread on core 2 00:27:30.710 Starting thread on core 3 00:27:30.710 Starting thread on core 0 00:27:30.710 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:30.970 00:27:30.970 real 0m11.223s 00:27:30.970 user 0m20.594s 00:27:30.970 sys 0m4.385s 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.970 ************************************ 00:27:30.970 END TEST nvmf_target_disconnect_tc2 00:27:30.970 ************************************ 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:30.970 rmmod nvme_tcp 00:27:30.970 rmmod nvme_fabrics 00:27:30.970 rmmod nvme_keyring 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3215493 ']' 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3215493 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3215493 ']' 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3215493 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3215493 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3215493' 00:27:30.970 killing process with pid 3215493 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@967 -- # kill 3215493 00:27:30.970 21:52:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3215493 00:27:31.230 21:52:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:31.230 21:52:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:31.230 21:52:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:31.230 21:52:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.230 21:52:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.230 21:52:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.230 21:52:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.230 21:52:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.138 21:52:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:33.138 00:27:33.138 real 0m19.196s 00:27:33.138 user 0m47.379s 00:27:33.138 sys 0m8.793s 00:27:33.138 21:52:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.138 21:52:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:33.138 ************************************ 00:27:33.138 END TEST nvmf_target_disconnect 00:27:33.138 ************************************ 00:27:33.139 21:52:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:33.139 00:27:33.139 real 5m47.211s 00:27:33.139 user 10m53.558s 00:27:33.139 sys 1m45.208s 00:27:33.139 21:52:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.139 21:52:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.139 ************************************ 00:27:33.139 END TEST nvmf_host 00:27:33.139 ************************************ 00:27:33.399 00:27:33.399 real 21m0.337s 00:27:33.399 user 45m25.681s 00:27:33.399 sys 6m13.400s 00:27:33.399 21:52:41 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.399 21:52:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.399 ************************************ 00:27:33.399 END TEST nvmf_tcp 00:27:33.399 ************************************ 00:27:33.399 21:52:41 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:27:33.399 21:52:41 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:33.399 21:52:41 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:33.399 21:52:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.399 21:52:41 -- common/autotest_common.sh@10 -- # set +x 00:27:33.399 ************************************ 00:27:33.399 START TEST spdkcli_nvmf_tcp 00:27:33.399 ************************************ 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:33.399 * Looking for test storage... 
00:27:33.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.399 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3217025 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3217025 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3217025 ']' 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:33.400 21:52:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.660 [2024-07-24 21:52:41.527585] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:27:33.660 [2024-07-24 21:52:41.527636] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217025 ] 00:27:33.660 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.660 [2024-07-24 21:52:41.581307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:33.660 [2024-07-24 21:52:41.662532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.660 [2024-07-24 21:52:41.662535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.229 21:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:34.229 21:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:27:34.229 21:52:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:34.229 21:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:34.229 21:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:34.488 21:52:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:34.488 21:52:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:34.488 21:52:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:34.488 21:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:34.488 21:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:34.488 21:52:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:34.488 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:34.488 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:34.488 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:34.488 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:34.488 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:34.488 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:34.488 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:34.488 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:34.488 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:34.488 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:34.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:34.488 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:34.488 ' 00:27:37.028 [2024-07-24 21:52:44.934179] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.408 [2024-07-24 21:52:46.222516] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:40.990 [2024-07-24 21:52:48.613907] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:42.901 [2024-07-24 21:52:50.672510] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:44.278 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:44.278 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:44.278 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:44.278 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:44.278 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:44.278 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:44.278 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:44.278 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:44.278 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:44.278 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:44.278 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:44.278 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:44.278 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:44.278 21:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:44.278 21:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:44.278 21:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.278 21:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:44.278 21:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:44.278 21:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.278 21:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:44.278 21:52:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:44.847 21:52:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:44.847 21:52:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:44.847 21:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:44.847 21:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:44.847 21:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.847 21:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:44.847 21:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:44.847 21:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.847 21:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:44.847 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:44.847 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:44.847 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:44.847 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:44.847 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:44.847 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:44.847 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:44.847 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:44.847 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:44.847 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:44.847 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:44.847 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:44.847 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:44.847 ' 00:27:50.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:50.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:50.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:50.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:50.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:50.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:50.133 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:50.133 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:50.133 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:50.133 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:50.133 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:27:50.133 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:50.133 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:50.133 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3217025 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3217025 ']' 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3217025 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3217025 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3217025' 00:27:50.133 killing process with pid 3217025 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3217025 00:27:50.133 21:52:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3217025 00:27:50.133 21:52:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:50.133 21:52:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:50.133 21:52:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3217025 ']' 00:27:50.133 21:52:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3217025 00:27:50.133 21:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3217025 ']' 00:27:50.133 21:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3217025 00:27:50.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3217025) - No such process 00:27:50.133 21:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3217025 is not found' 00:27:50.133 Process with pid 3217025 is not found 00:27:50.133 21:52:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:50.133 21:52:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:50.133 21:52:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:50.133 00:27:50.133 real 0m16.688s 00:27:50.133 user 0m35.807s 00:27:50.133 sys 0m0.811s 00:27:50.133 21:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:50.133 21:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.133 ************************************ 00:27:50.133 END TEST spdkcli_nvmf_tcp 00:27:50.133 ************************************ 00:27:50.133 21:52:58 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:50.133 21:52:58 -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:50.133 21:52:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:50.133 21:52:58 -- common/autotest_common.sh@10 -- # set +x 00:27:50.133 ************************************ 00:27:50.133 START TEST nvmf_identify_passthru 00:27:50.133 ************************************ 00:27:50.133 21:52:58 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:50.133 * Looking for test storage... 00:27:50.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:50.133 21:52:58 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.133 21:52:58 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.133 21:52:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.133 21:52:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.133 21:52:58 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.133 21:52:58 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.133 21:52:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.133 21:52:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:50.133 21:52:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.133 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.133 21:52:58 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.133 21:52:58 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.133 21:52:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.133 21:52:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.133 21:52:58 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.134 21:52:58 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.134 21:52:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.134 21:52:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:50.134 21:52:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.134 21:52:58 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:50.134 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:50.134 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.134 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:50.134 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:50.134 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:50.134 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.134 21:52:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:50.134 21:52:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.134 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:50.134 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:50.134 21:52:58 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:50.134 21:52:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:55.418 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
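From here nvmftestinit (NET_TYPE=phy) walks the PCI bus for supported NVMe-oF NICs, keeps the Intel E810 functions (device ID 0x159b) and resolves each one to its kernel net device, which is what the "Found 0000:86:00.x" and "Found net devices under ...: cvl_0_x" lines below report. A rough standalone equivalent of that lookup, assuming lspci is available and the same 8086:159b parts are installed; only the device ID and sysfs layout mirror the log, the loop itself is illustrative:

# Illustrative sketch, not part of the harness: list the net devices that
# back each Intel E810 (8086:159b) PCI function.
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
  ls "/sys/bus/pci/devices/$pci/net/"
done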
00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:55.419 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:55.419 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:55.419 21:53:03 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:55.419 Found net devices under 0000:86:00.0: cvl_0_0 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:55.419 Found net devices under 0000:86:00.1: cvl_0_1 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:55.419 21:53:03 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:55.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:55.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:27:55.419 00:27:55.419 --- 10.0.0.2 ping statistics --- 00:27:55.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.419 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:55.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:55.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.441 ms 00:27:55.419 00:27:55.419 --- 10.0.0.1 ping statistics --- 00:27:55.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.419 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:55.419 21:53:03 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:55.419 21:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:55.419 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:55.419 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:55.419 21:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:55.419 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=() 00:27:55.419 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # local bdfs 00:27:55.419 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:27:55.419 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:27:55.419 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=() 00:27:55.419 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # local bdfs 00:27:55.419 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:55.419 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:55.419 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:27:55.679 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:27:55.679 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:5e:00.0 00:27:55.679 21:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # echo 0000:5e:00.0 00:27:55.679 21:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:55.679 21:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:55.679 21:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:55.679 21:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:55.679 21:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:55.679 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.879 
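Before the passthru target is started, the test resolves the first local NVMe controller's PCIe address via gen_nvme.sh and reads its serial number straight from the device with spdk_nvme_identify; that serial (BTLJ72430F0E1P0FGN below) is what the NVMe-oF passthru controller is later expected to report. A condensed sketch of that lookup, reusing the script paths and flags shown in the trace above (illustrative, not captured output):

# Illustrative sketch of the serial-number lookup traced above.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
"$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
  | awk '/Serial Number:/ {print $3}'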
21:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:27:59.879 21:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:59.879 21:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:59.879 21:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:59.879 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.083 21:53:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:04.083 21:53:11 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:04.083 21:53:11 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.083 21:53:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:04.083 21:53:11 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:04.083 21:53:11 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.083 21:53:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:04.083 21:53:11 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:04.083 21:53:11 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3224054 00:28:04.083 21:53:11 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:04.083 21:53:11 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3224054 00:28:04.083 21:53:11 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3224054 ']' 00:28:04.083 21:53:11 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.083 21:53:11 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:04.083 21:53:11 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.083 21:53:11 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:04.083 21:53:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:04.083 [2024-07-24 21:53:11.900991] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:28:04.083 [2024-07-24 21:53:11.901039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.083 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.083 [2024-07-24 21:53:11.959628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:04.083 [2024-07-24 21:53:12.041744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.083 [2024-07-24 21:53:12.041780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:04.083 [2024-07-24 21:53:12.041787] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.083 [2024-07-24 21:53:12.041793] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.083 [2024-07-24 21:53:12.041798] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:04.083 [2024-07-24 21:53:12.041846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.083 [2024-07-24 21:53:12.041942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.083 [2024-07-24 21:53:12.042030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.083 [2024-07-24 21:53:12.042031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.654 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.654 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:28:04.654 21:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:04.654 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.654 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:04.654 INFO: Log level set to 20 00:28:04.654 INFO: Requests: 00:28:04.654 { 00:28:04.654 "jsonrpc": "2.0", 00:28:04.654 "method": "nvmf_set_config", 00:28:04.654 "id": 1, 00:28:04.654 "params": { 00:28:04.654 "admin_cmd_passthru": { 00:28:04.654 "identify_ctrlr": true 00:28:04.654 } 00:28:04.654 } 00:28:04.654 } 00:28:04.654 00:28:04.654 INFO: response: 00:28:04.654 { 00:28:04.654 "jsonrpc": "2.0", 00:28:04.654 "id": 1, 00:28:04.654 "result": true 00:28:04.654 } 00:28:04.654 00:28:04.654 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.654 21:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:04.654 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.654 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:04.654 INFO: Setting log level to 20 00:28:04.654 INFO: Setting log level to 20 00:28:04.654 INFO: Log level set to 20 00:28:04.654 INFO: Log level set to 20 00:28:04.654 INFO: Requests: 00:28:04.654 { 00:28:04.654 "jsonrpc": "2.0", 00:28:04.654 "method": "framework_start_init", 00:28:04.654 "id": 1 00:28:04.654 } 00:28:04.654 00:28:04.654 INFO: Requests: 00:28:04.654 { 00:28:04.654 "jsonrpc": "2.0", 00:28:04.654 "method": "framework_start_init", 00:28:04.654 "id": 1 00:28:04.654 } 00:28:04.654 00:28:04.915 [2024-07-24 21:53:12.825502] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:04.915 INFO: response: 00:28:04.915 { 00:28:04.915 "jsonrpc": "2.0", 00:28:04.915 "id": 1, 00:28:04.915 "result": true 00:28:04.915 } 00:28:04.915 00:28:04.915 INFO: response: 00:28:04.915 { 00:28:04.915 "jsonrpc": "2.0", 00:28:04.915 "id": 1, 00:28:04.915 "result": true 00:28:04.915 } 00:28:04.915 00:28:04.915 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.915 21:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:04.915 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.915 21:53:12 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:28:04.915 INFO: Setting log level to 40 00:28:04.915 INFO: Setting log level to 40 00:28:04.915 INFO: Setting log level to 40 00:28:04.915 [2024-07-24 21:53:12.838956] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.915 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.915 21:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:04.915 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.915 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:04.915 21:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:28:04.915 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.915 21:53:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.215 Nvme0n1 00:28:08.215 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.215 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:08.215 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.215 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.215 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.215 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:08.215 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.215 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.216 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.216 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.216 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.216 [2024-07-24 21:53:15.735880] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.216 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:08.216 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.216 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.216 [ 00:28:08.216 { 00:28:08.216 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:08.216 "subtype": "Discovery", 00:28:08.216 "listen_addresses": [], 00:28:08.216 "allow_any_host": true, 00:28:08.216 "hosts": [] 00:28:08.216 }, 00:28:08.216 { 00:28:08.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.216 "subtype": "NVMe", 00:28:08.216 "listen_addresses": [ 00:28:08.216 { 00:28:08.216 "trtype": "TCP", 00:28:08.216 "adrfam": "IPv4", 00:28:08.216 "traddr": "10.0.0.2", 00:28:08.216 "trsvcid": "4420" 00:28:08.216 } 00:28:08.216 ], 00:28:08.216 "allow_any_host": true, 00:28:08.216 "hosts": [], 00:28:08.216 "serial_number": 
"SPDK00000000000001", 00:28:08.216 "model_number": "SPDK bdev Controller", 00:28:08.216 "max_namespaces": 1, 00:28:08.216 "min_cntlid": 1, 00:28:08.216 "max_cntlid": 65519, 00:28:08.216 "namespaces": [ 00:28:08.216 { 00:28:08.216 "nsid": 1, 00:28:08.216 "bdev_name": "Nvme0n1", 00:28:08.216 "name": "Nvme0n1", 00:28:08.216 "nguid": "4CA2667ED6E24B9CA2EDAE7BCED9EF6D", 00:28:08.216 "uuid": "4ca2667e-d6e2-4b9c-a2ed-ae7bced9ef6d" 00:28:08.216 } 00:28:08.216 ] 00:28:08.216 } 00:28:08.216 ] 00:28:08.216 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:08.216 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:08.216 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:08.216 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.216 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.216 21:53:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:08.216 21:53:15 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:08.216 21:53:15 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:08.216 21:53:15 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:08.216 21:53:16 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:08.216 21:53:16 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:08.216 21:53:16 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:08.216 21:53:16 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:08.216 rmmod nvme_tcp 00:28:08.216 rmmod nvme_fabrics 00:28:08.216 rmmod nvme_keyring 00:28:08.216 21:53:16 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:08.216 21:53:16 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:08.216 21:53:16 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:08.216 21:53:16 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3224054 ']' 00:28:08.216 21:53:16 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3224054 00:28:08.216 21:53:16 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3224054 ']' 00:28:08.216 21:53:16 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3224054 00:28:08.216 21:53:16 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:28:08.216 21:53:16 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:08.216 21:53:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3224054 00:28:08.216 21:53:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:08.216 21:53:16 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:08.216 21:53:16 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3224054' 00:28:08.216 killing process with pid 3224054 00:28:08.216 21:53:16 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3224054 00:28:08.216 21:53:16 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3224054 00:28:09.655 21:53:17 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:09.655 21:53:17 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:09.655 21:53:17 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:09.655 21:53:17 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:09.655 21:53:17 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:09.655 21:53:17 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.655 21:53:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:09.655 21:53:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.566 21:53:19 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:11.566 00:28:11.566 real 0m21.525s 00:28:11.566 user 0m29.450s 00:28:11.566 sys 0m4.730s 00:28:11.566 21:53:19 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:11.566 21:53:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:11.566 ************************************ 00:28:11.566 END TEST nvmf_identify_passthru 00:28:11.566 ************************************ 00:28:11.566 21:53:19 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:11.566 21:53:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:11.566 21:53:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.566 21:53:19 -- common/autotest_common.sh@10 -- # set +x 00:28:11.826 ************************************ 00:28:11.826 START TEST nvmf_dif 00:28:11.826 ************************************ 00:28:11.826 21:53:19 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:11.826 * Looking for test storage... 
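Before the dif tests start below, one note on the check the passthru test just performed: comparing the serial and model numbers reported over the fabric against the ones read over PCIe proves that --passthru-identify-ctrlr really forwards Identify data from the backing Intel drive. A rough equivalent with stock nvme-cli from the initiator side (the device name is whatever the kernel assigns; /dev/nvme1 is used here purely for illustration):

  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme1 | grep -E '^(sn|mn) '            # expect BTLJ72430F0E1P0FGN / INTEL ...
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The test itself drives both sides with spdk_nvme_identify instead, so nothing has to be connected through the kernel initiator.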
00:28:11.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:11.826 21:53:19 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.826 21:53:19 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.826 21:53:19 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.826 21:53:19 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.826 21:53:19 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.826 21:53:19 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.826 21:53:19 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.826 21:53:19 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:28:11.826 21:53:19 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:11.826 21:53:19 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:11.827 21:53:19 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:11.827 21:53:19 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:11.827 21:53:19 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:11.827 21:53:19 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:11.827 21:53:19 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.827 21:53:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:11.827 21:53:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:11.827 21:53:19 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.827 21:53:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:17.110 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:17.110 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:17.110 21:53:24 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:17.111 Found net devices under 0000:86:00.0: cvl_0_0 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:17.111 Found net devices under 0000:86:00.1: cvl_0_1 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.111 21:53:24 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.111 21:53:25 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.111 21:53:25 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.111 21:53:25 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:17.111 21:53:25 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.111 21:53:25 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.111 21:53:25 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.111 21:53:25 
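The device discovery traced above (gather_supported_nvmf_pci_devs) simply walks the PCI bus for known NIC IDs and reads the netdev name back out of sysfs; for the E810 pair used here (vendor 0x8086, device 0x159b) the same result can be reproduced by hand, roughly (assumes lspci from pciutils):

  for pci in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
  done
  # expected here: 0000:86:00.0 -> cvl_0_0 and 0000:86:00.1 -> cvl_0_1

With both ports found, the same namespace split and iptables rule as in the identify_passthru run are applied before the dif target is started.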
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:17.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:28:17.111 00:28:17.111 --- 10.0.0.2 ping statistics --- 00:28:17.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.111 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:17.111 21:53:25 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:17.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.382 ms 00:28:17.111 00:28:17.111 --- 10.0.0.1 ping statistics --- 00:28:17.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.111 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:28:17.111 21:53:25 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.111 21:53:25 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:17.111 21:53:25 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:17.111 21:53:25 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:19.647 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:19.647 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:19.647 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:19.907 21:53:27 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.907 21:53:27 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:19.907 21:53:27 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:19.907 21:53:27 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.907 21:53:27 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:19.907 21:53:27 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:19.907 21:53:27 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:19.907 21:53:27 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:19.907 21:53:27 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:19.907 21:53:27 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:19.907 21:53:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:19.907 21:53:27 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3229585 00:28:19.907 21:53:27 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3229585 00:28:19.907 21:53:27 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:19.907 21:53:27 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3229585 ']' 00:28:19.907 21:53:27 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.907 21:53:27 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:19.907 21:53:27 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.907 21:53:27 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:19.907 21:53:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:19.907 [2024-07-24 21:53:27.853872] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:28:19.907 [2024-07-24 21:53:27.853918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.907 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.907 [2024-07-24 21:53:27.912665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.907 [2024-07-24 21:53:27.996686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.907 [2024-07-24 21:53:27.996720] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.907 [2024-07-24 21:53:27.996727] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.907 [2024-07-24 21:53:27.996733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.907 [2024-07-24 21:53:27.996738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
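Once the target app is up, the dif tests build their subsystems on top of null bdevs carrying 16 bytes of metadata with DIF type 1, and the TCP transport is created with --dif-insert-or-strip so the target rather than the initiator handles the protection information. Consolidated from the rpc_cmd calls traced below for fio_dif_1_default, again written as scripts/rpc.py calls:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1    # 64 MiB, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420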
00:28:19.907 [2024-07-24 21:53:27.996755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.845 21:53:28 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:20.845 21:53:28 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:28:20.845 21:53:28 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:20.845 21:53:28 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:20.845 21:53:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.845 21:53:28 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.845 21:53:28 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:20.845 21:53:28 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:20.845 21:53:28 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.845 21:53:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.845 [2024-07-24 21:53:28.701347] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.845 21:53:28 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.845 21:53:28 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:20.845 21:53:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:20.845 21:53:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.845 21:53:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.845 ************************************ 00:28:20.845 START TEST fio_dif_1_default 00:28:20.845 ************************************ 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:20.845 bdev_null0 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:20.845 [2024-07-24 21:53:28.769635] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.845 { 00:28:20.845 "params": { 00:28:20.845 "name": "Nvme$subsystem", 00:28:20.845 "trtype": "$TEST_TRANSPORT", 00:28:20.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.845 "adrfam": "ipv4", 00:28:20.845 "trsvcid": "$NVMF_PORT", 00:28:20.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.845 "hdgst": ${hdgst:-false}, 00:28:20.845 "ddgst": ${ddgst:-false} 00:28:20.845 }, 00:28:20.845 "method": "bdev_nvme_attach_controller" 00:28:20.845 } 00:28:20.845 EOF 00:28:20.845 )") 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local sanitizers 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # shift 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local asan_lib= 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libasan 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:20.845 "params": { 00:28:20.845 "name": "Nvme0", 00:28:20.845 "trtype": "tcp", 00:28:20.845 "traddr": "10.0.0.2", 00:28:20.845 "adrfam": "ipv4", 00:28:20.845 "trsvcid": "4420", 00:28:20.845 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:20.845 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:20.845 "hdgst": false, 00:28:20.845 "ddgst": false 00:28:20.845 }, 00:28:20.845 "method": "bdev_nvme_attach_controller" 00:28:20.845 }' 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:20.845 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:20.846 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:20.846 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:28:20.846 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.846 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:20.846 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:20.846 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:20.846 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:20.846 21:53:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.105 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:21.105 fio-3.35 00:28:21.105 Starting 1 thread 00:28:21.105 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.332 00:28:33.332 filename0: (groupid=0, jobs=1): err= 0: pid=3230105: Wed Jul 24 21:53:39 2024 00:28:33.332 read: IOPS=94, BW=380KiB/s (389kB/s)(3808KiB/10022msec) 00:28:33.332 slat (nsec): min=6062, max=71400, avg=6536.84, stdev=2632.03 00:28:33.332 clat (usec): min=41812, max=45022, avg=42086.88, stdev=348.28 00:28:33.332 lat (usec): min=41818, max=45055, avg=42093.41, stdev=348.79 00:28:33.332 clat percentiles (usec): 00:28:33.332 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:28:33.332 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:28:33.332 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:28:33.332 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:28:33.332 | 99.99th=[44827] 00:28:33.332 bw ( KiB/s): min= 352, max= 384, per=99.75%, avg=379.20, stdev=11.72, samples=20 00:28:33.332 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:28:33.332 
lat (msec) : 50=100.00% 00:28:33.332 cpu : usr=94.98%, sys=4.75%, ctx=12, majf=0, minf=264 00:28:33.332 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:33.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.332 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.332 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:33.332 00:28:33.332 Run status group 0 (all jobs): 00:28:33.332 READ: bw=380KiB/s (389kB/s), 380KiB/s-380KiB/s (389kB/s-389kB/s), io=3808KiB (3899kB), run=10022-10022msec 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.332 00:28:33.332 real 0m11.203s 00:28:33.332 user 0m16.214s 00:28:33.332 sys 0m0.750s 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:33.332 21:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:33.332 ************************************ 00:28:33.332 END TEST fio_dif_1_default 00:28:33.332 ************************************ 00:28:33.332 21:53:39 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:33.332 21:53:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:33.332 21:53:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.332 21:53:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:33.332 ************************************ 00:28:33.332 START TEST fio_dif_1_multi_subsystems 00:28:33.332 ************************************ 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:33.332 21:53:40 
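The fio_dif_1_default job that just completed is driven by fio's external spdk_bdev engine: the bdev JSON shown earlier (bdev_nvme_attach_controller to 10.0.0.2:4420) is fed in over /dev/fd/62 and a generated job file over /dev/fd/61. The job file itself is not echoed into this log; a representative job consistent with the header above (randread, 4 KiB blocks, iodepth 4, roughly 10 s against Nvme0n1) would look approximately like:

  [global]
  ioengine=spdk_bdev
  thread=1
  time_based=1
  runtime=10
  [filename0]
  filename=Nvme0n1
  rw=randread
  bs=4096
  iodepth=4

launched as LD_PRELOAD=.../build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf <bdev.json> <job.fio>, the same pattern visible in the trace. The exact options gen_fio_conf emits may differ from this sketch.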
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.332 bdev_null0 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.332 [2024-07-24 21:53:40.044070] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.332 bdev_null1 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:33.332 { 00:28:33.332 "params": { 00:28:33.332 "name": "Nvme$subsystem", 00:28:33.332 "trtype": "$TEST_TRANSPORT", 00:28:33.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.332 "adrfam": "ipv4", 00:28:33.332 "trsvcid": "$NVMF_PORT", 00:28:33.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.332 "hdgst": ${hdgst:-false}, 00:28:33.332 "ddgst": ${ddgst:-false} 00:28:33.332 }, 00:28:33.332 "method": "bdev_nvme_attach_controller" 00:28:33.332 } 00:28:33.332 EOF 00:28:33.332 )") 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local sanitizers 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:33.332 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # shift 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:33.333 21:53:40 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local asan_lib= 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libasan 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:33.333 { 00:28:33.333 "params": { 00:28:33.333 "name": "Nvme$subsystem", 00:28:33.333 "trtype": "$TEST_TRANSPORT", 00:28:33.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.333 "adrfam": "ipv4", 00:28:33.333 "trsvcid": "$NVMF_PORT", 00:28:33.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.333 "hdgst": ${hdgst:-false}, 00:28:33.333 "ddgst": ${ddgst:-false} 00:28:33.333 }, 00:28:33.333 "method": "bdev_nvme_attach_controller" 00:28:33.333 } 00:28:33.333 EOF 00:28:33.333 )") 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
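Editor's note: the target-side setup traced above boils down to four SPDK RPCs per subsystem. A minimal sketch of the same steps done by hand (assuming a running nvmf_tgt and the usual scripts/rpc.py from the SPDK repo root; the test script issues the identical calls through its rpc_cmd wrapper):

    # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 1
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # dedicated NVMe-oF subsystem for that bdev, reachable over TCP
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # repeated with bdev_null1/cnode1 for the second subsystem of this test

The JSON printed next is what the fio spdk_bdev ioengine consumes (via --spdk_json_conf on /dev/fd/62) to attach one NVMe-oF controller per subsystem.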
00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:33.333 "params": { 00:28:33.333 "name": "Nvme0", 00:28:33.333 "trtype": "tcp", 00:28:33.333 "traddr": "10.0.0.2", 00:28:33.333 "adrfam": "ipv4", 00:28:33.333 "trsvcid": "4420", 00:28:33.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:33.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:33.333 "hdgst": false, 00:28:33.333 "ddgst": false 00:28:33.333 }, 00:28:33.333 "method": "bdev_nvme_attach_controller" 00:28:33.333 },{ 00:28:33.333 "params": { 00:28:33.333 "name": "Nvme1", 00:28:33.333 "trtype": "tcp", 00:28:33.333 "traddr": "10.0.0.2", 00:28:33.333 "adrfam": "ipv4", 00:28:33.333 "trsvcid": "4420", 00:28:33.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:33.333 "hdgst": false, 00:28:33.333 "ddgst": false 00:28:33.333 }, 00:28:33.333 "method": "bdev_nvme_attach_controller" 00:28:33.333 }' 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:33.333 21:53:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:33.333 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:33.333 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:33.333 fio-3.35 00:28:33.333 Starting 2 threads 00:28:33.333 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.315 00:28:43.315 filename0: (groupid=0, jobs=1): err= 0: pid=3232078: Wed Jul 24 21:53:51 2024 00:28:43.315 read: IOPS=181, BW=725KiB/s (743kB/s)(7280KiB/10038msec) 00:28:43.315 slat (nsec): min=5943, max=67790, avg=7146.97, stdev=2421.38 00:28:43.315 clat (usec): min=1672, max=43818, avg=22040.71, stdev=20227.55 00:28:43.315 lat (usec): min=1678, max=43850, avg=22047.86, stdev=20226.92 00:28:43.315 clat percentiles (usec): 00:28:43.315 | 1.00th=[ 1680], 5.00th=[ 1696], 10.00th=[ 1696], 20.00th=[ 1713], 00:28:43.315 | 30.00th=[ 1795], 40.00th=[ 1844], 50.00th=[41681], 60.00th=[42206], 00:28:43.315 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:28:43.315 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:28:43.315 | 99.99th=[43779] 
00:28:43.315 bw ( KiB/s): min= 704, max= 768, per=65.73%, avg=726.40, stdev=29.55, samples=20 00:28:43.315 iops : min= 176, max= 192, avg=181.60, stdev= 7.39, samples=20 00:28:43.315 lat (msec) : 2=49.29%, 4=0.60%, 50=50.11% 00:28:43.315 cpu : usr=97.74%, sys=2.00%, ctx=14, majf=0, minf=174 00:28:43.316 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.316 issued rwts: total=1820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.316 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:43.316 filename1: (groupid=0, jobs=1): err= 0: pid=3232079: Wed Jul 24 21:53:51 2024 00:28:43.316 read: IOPS=94, BW=380KiB/s (389kB/s)(3808KiB/10022msec) 00:28:43.316 slat (nsec): min=5971, max=71308, avg=7800.11, stdev=3299.88 00:28:43.316 clat (usec): min=41793, max=44666, avg=42085.08, stdev=341.80 00:28:43.316 lat (usec): min=41800, max=44695, avg=42092.88, stdev=342.28 00:28:43.316 clat percentiles (usec): 00:28:43.316 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:28:43.316 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:28:43.316 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:28:43.316 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:28:43.316 | 99.99th=[44827] 00:28:43.316 bw ( KiB/s): min= 352, max= 384, per=34.31%, avg=379.20, stdev=11.72, samples=20 00:28:43.316 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:28:43.316 lat (msec) : 50=100.00% 00:28:43.316 cpu : usr=97.76%, sys=1.98%, ctx=14, majf=0, minf=62 00:28:43.316 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.316 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.316 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:43.316 00:28:43.316 Run status group 0 (all jobs): 00:28:43.316 READ: bw=1105KiB/s (1131kB/s), 380KiB/s-725KiB/s (389kB/s-743kB/s), io=10.8MiB (11.4MB), run=10022-10038msec 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.576 21:53:51 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.576 00:28:43.576 real 0m11.496s 00:28:43.576 user 0m26.175s 00:28:43.576 sys 0m0.780s 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:43.576 21:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:43.576 ************************************ 00:28:43.576 END TEST fio_dif_1_multi_subsystems 00:28:43.576 ************************************ 00:28:43.576 21:53:51 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:43.576 21:53:51 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:43.576 21:53:51 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.576 21:53:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:43.576 ************************************ 00:28:43.577 START TEST fio_dif_rand_params 00:28:43.577 ************************************ 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.577 bdev_null0 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.577 [2024-07-24 21:53:51.613004] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:43.577 { 00:28:43.577 "params": { 00:28:43.577 "name": "Nvme$subsystem", 00:28:43.577 "trtype": "$TEST_TRANSPORT", 00:28:43.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.577 "adrfam": "ipv4", 00:28:43.577 "trsvcid": "$NVMF_PORT", 00:28:43.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.577 "hdgst": 
${hdgst:-false}, 00:28:43.577 "ddgst": ${ddgst:-false} 00:28:43.577 }, 00:28:43.577 "method": "bdev_nvme_attach_controller" 00:28:43.577 } 00:28:43.577 EOF 00:28:43.577 )") 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
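Editor's note: the job file handed to fio on the second descriptor (/dev/fd/61, produced by gen_fio_conf) is not echoed to the log. A hypothetical equivalent for this run's parameters (bs=128k, numjobs=3, iodepth=3, runtime=5) is sketched below; the filename Nvme0n1 assumes the default namespace bdev name derived from the attached controller Nvme0, and the global flags are typical spdk_bdev-ioengine settings, not a copy of what the script actually writes:

    [global]
    ioengine=spdk_bdev
    thread=1
    time_based=1
    runtime=5
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3

    [filename0]
    filename=Nvme0n1

With numjobs=3, fio replicates the filename0 section three times, which is why the banner below shows one filename0 line followed by "..." and "Starting 3 threads".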
00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:43.577 "params": { 00:28:43.577 "name": "Nvme0", 00:28:43.577 "trtype": "tcp", 00:28:43.577 "traddr": "10.0.0.2", 00:28:43.577 "adrfam": "ipv4", 00:28:43.577 "trsvcid": "4420", 00:28:43.577 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:43.577 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:43.577 "hdgst": false, 00:28:43.577 "ddgst": false 00:28:43.577 }, 00:28:43.577 "method": "bdev_nvme_attach_controller" 00:28:43.577 }' 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:43.577 21:53:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:44.143 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:44.143 ... 
00:28:44.143 fio-3.35 00:28:44.143 Starting 3 threads 00:28:44.143 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.415 00:28:49.415 filename0: (groupid=0, jobs=1): err= 0: pid=3234044: Wed Jul 24 21:53:57 2024 00:28:49.415 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(154MiB/5006msec) 00:28:49.415 slat (nsec): min=4270, max=22858, avg=9145.39, stdev=2655.69 00:28:49.415 clat (usec): min=5290, max=56220, avg=12214.79, stdev=11193.10 00:28:49.415 lat (usec): min=5300, max=56226, avg=12223.93, stdev=11193.28 00:28:49.415 clat percentiles (usec): 00:28:49.415 | 1.00th=[ 5473], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 7242], 00:28:49.415 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9896], 00:28:49.415 | 70.00th=[10552], 80.00th=[11338], 90.00th=[13435], 95.00th=[49021], 00:28:49.415 | 99.00th=[52691], 99.50th=[54789], 99.90th=[55837], 99.95th=[56361], 00:28:49.415 | 99.99th=[56361] 00:28:49.415 bw ( KiB/s): min=24576, max=38144, per=38.05%, avg=31360.00, stdev=5399.99, samples=10 00:28:49.415 iops : min= 192, max= 298, avg=245.00, stdev=42.19, samples=10 00:28:49.415 lat (msec) : 10=62.30%, 20=30.13%, 50=3.99%, 100=3.58% 00:28:49.415 cpu : usr=95.40%, sys=4.06%, ctx=6, majf=0, minf=63 00:28:49.415 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.415 issued rwts: total=1228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.415 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:49.415 filename0: (groupid=0, jobs=1): err= 0: pid=3234045: Wed Jul 24 21:53:57 2024 00:28:49.415 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(135MiB/5031msec) 00:28:49.415 slat (nsec): min=6243, max=22723, avg=9261.21, stdev=2773.88 00:28:49.415 clat (usec): min=5153, max=61375, avg=13955.33, stdev=13674.47 00:28:49.415 lat (usec): min=5161, max=61396, avg=13964.59, stdev=13674.44 00:28:49.415 clat percentiles (usec): 00:28:49.415 | 1.00th=[ 5407], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7504], 00:28:49.415 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10028], 00:28:49.415 | 70.00th=[10814], 80.00th=[12780], 90.00th=[47449], 95.00th=[54264], 00:28:49.415 | 99.00th=[57934], 99.50th=[59507], 99.90th=[61080], 99.95th=[61604], 00:28:49.415 | 99.99th=[61604] 00:28:49.415 bw ( KiB/s): min=16896, max=43776, per=33.46%, avg=27571.20, stdev=7869.24, samples=10 00:28:49.415 iops : min= 132, max= 342, avg=215.40, stdev=61.48, samples=10 00:28:49.415 lat (msec) : 10=60.74%, 20=28.98%, 50=1.11%, 100=9.17% 00:28:49.415 cpu : usr=95.41%, sys=3.96%, ctx=7, majf=0, minf=124 00:28:49.415 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.415 issued rwts: total=1080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.415 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:49.415 filename0: (groupid=0, jobs=1): err= 0: pid=3234046: Wed Jul 24 21:53:57 2024 00:28:49.415 read: IOPS=186, BW=23.3MiB/s (24.4MB/s)(116MiB/5004msec) 00:28:49.415 slat (nsec): min=6251, max=23336, avg=9386.93, stdev=2797.87 00:28:49.415 clat (usec): min=5400, max=96763, avg=16107.17, stdev=16050.77 00:28:49.415 lat (usec): min=5411, max=96776, avg=16116.56, stdev=16050.83 00:28:49.415 clat percentiles (usec): 
00:28:49.415 | 1.00th=[ 6194], 5.00th=[ 6718], 10.00th=[ 7373], 20.00th=[ 7832], 00:28:49.415 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9896], 60.00th=[10552], 00:28:49.415 | 70.00th=[11338], 80.00th=[13960], 90.00th=[50594], 95.00th=[52691], 00:28:49.415 | 99.00th=[56361], 99.50th=[93848], 99.90th=[96994], 99.95th=[96994], 00:28:49.415 | 99.99th=[96994] 00:28:49.415 bw ( KiB/s): min=18688, max=33024, per=28.83%, avg=23761.10, stdev=3879.31, samples=10 00:28:49.415 iops : min= 146, max= 258, avg=185.60, stdev=30.33, samples=10 00:28:49.416 lat (msec) : 10=51.02%, 20=33.83%, 50=3.97%, 100=11.17% 00:28:49.416 cpu : usr=95.78%, sys=3.78%, ctx=7, majf=0, minf=70 00:28:49.416 IO depths : 1=3.9%, 2=96.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.416 issued rwts: total=931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.416 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:49.416 00:28:49.416 Run status group 0 (all jobs): 00:28:49.416 READ: bw=80.5MiB/s (84.4MB/s), 23.3MiB/s-30.7MiB/s (24.4MB/s-32.2MB/s), io=405MiB (425MB), run=5004-5031msec 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
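Editor's note: a quick sanity check on the READ summary a few entries above (not produced by the harness itself): per-job bandwidth is simply IOPS times the 128 KiB block size, e.g. for the fastest job:

    awk 'BEGIN { printf "%.1f MiB/s\n", 245 * 128 / 1024 }'   # ~30.6 MiB/s vs. the reported 30.7 MiB/s (avg bw 31360 KiB/s)

The next run logged below switches to DIF type 2 null bdevs with bs=4k, iodepth=16, numjobs=8 and two extra files, so three subsystems (cnode0..cnode2) are created and fio ends up starting 24 threads.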
00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 bdev_null0 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 [2024-07-24 21:53:57.714686] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 bdev_null1 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 bdev_null2 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:49.676 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:49.676 { 00:28:49.676 "params": { 00:28:49.676 "name": "Nvme$subsystem", 00:28:49.676 "trtype": "$TEST_TRANSPORT", 00:28:49.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.676 "adrfam": "ipv4", 00:28:49.677 "trsvcid": "$NVMF_PORT", 00:28:49.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.677 "hdgst": ${hdgst:-false}, 00:28:49.677 "ddgst": ${ddgst:-false} 00:28:49.677 }, 00:28:49.677 "method": "bdev_nvme_attach_controller" 00:28:49.677 } 00:28:49.677 EOF 00:28:49.677 )") 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:49.677 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:49.984 { 00:28:49.984 "params": { 00:28:49.984 "name": "Nvme$subsystem", 00:28:49.984 "trtype": "$TEST_TRANSPORT", 00:28:49.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.984 "adrfam": "ipv4", 00:28:49.984 "trsvcid": "$NVMF_PORT", 00:28:49.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.984 "hdgst": ${hdgst:-false}, 00:28:49.984 "ddgst": ${ddgst:-false} 00:28:49.984 }, 00:28:49.984 "method": "bdev_nvme_attach_controller" 00:28:49.984 } 00:28:49.984 EOF 00:28:49.984 )") 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:49.984 { 00:28:49.984 "params": { 00:28:49.984 "name": "Nvme$subsystem", 00:28:49.984 "trtype": "$TEST_TRANSPORT", 00:28:49.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.984 "adrfam": "ipv4", 00:28:49.984 "trsvcid": "$NVMF_PORT", 00:28:49.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.984 "hdgst": ${hdgst:-false}, 00:28:49.984 "ddgst": ${ddgst:-false} 00:28:49.984 }, 00:28:49.984 "method": "bdev_nvme_attach_controller" 00:28:49.984 } 00:28:49.984 EOF 00:28:49.984 )") 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:49.984 "params": { 00:28:49.984 "name": "Nvme0", 00:28:49.984 "trtype": "tcp", 00:28:49.984 "traddr": "10.0.0.2", 00:28:49.984 "adrfam": "ipv4", 00:28:49.984 "trsvcid": "4420", 00:28:49.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:49.984 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:49.984 "hdgst": false, 00:28:49.984 "ddgst": false 00:28:49.984 }, 00:28:49.984 "method": "bdev_nvme_attach_controller" 00:28:49.984 },{ 00:28:49.984 "params": { 00:28:49.984 "name": "Nvme1", 00:28:49.984 "trtype": "tcp", 00:28:49.984 "traddr": "10.0.0.2", 00:28:49.984 "adrfam": "ipv4", 00:28:49.984 "trsvcid": "4420", 00:28:49.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:49.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:49.984 "hdgst": false, 00:28:49.984 "ddgst": false 00:28:49.984 }, 00:28:49.984 "method": "bdev_nvme_attach_controller" 00:28:49.984 },{ 00:28:49.984 "params": { 00:28:49.984 "name": "Nvme2", 00:28:49.984 "trtype": "tcp", 00:28:49.984 "traddr": "10.0.0.2", 00:28:49.984 "adrfam": "ipv4", 00:28:49.984 "trsvcid": "4420", 00:28:49.984 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:49.984 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:49.984 "hdgst": false, 00:28:49.984 "ddgst": false 00:28:49.984 }, 00:28:49.984 "method": "bdev_nvme_attach_controller" 00:28:49.984 }' 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:49.984 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.985 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:49.985 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:28:49.985 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:28:49.985 21:53:57 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1343 -- # asan_lib= 00:28:49.985 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:28:49.985 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:49.985 21:53:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.243 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:50.243 ... 00:28:50.243 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:50.243 ... 00:28:50.243 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:50.243 ... 00:28:50.243 fio-3.35 00:28:50.243 Starting 24 threads 00:28:50.243 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.444 00:29:02.444 filename0: (groupid=0, jobs=1): err= 0: pid=3235088: Wed Jul 24 21:54:08 2024 00:29:02.444 read: IOPS=619, BW=2476KiB/s (2536kB/s)(24.2MiB/10012msec) 00:29:02.444 slat (nsec): min=6656, max=96400, avg=27477.58, stdev=16603.66 00:29:02.444 clat (usec): min=3180, max=46730, avg=25674.80, stdev=3367.99 00:29:02.444 lat (usec): min=3196, max=46765, avg=25702.27, stdev=3369.10 00:29:02.444 clat percentiles (usec): 00:29:02.444 | 1.00th=[13566], 5.00th=[22414], 10.00th=[23725], 20.00th=[24249], 00:29:02.444 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:29:02.444 | 70.00th=[26084], 80.00th=[26870], 90.00th=[27919], 95.00th=[29754], 00:29:02.444 | 99.00th=[39060], 99.50th=[41681], 99.90th=[45351], 99.95th=[46400], 00:29:02.444 | 99.99th=[46924] 00:29:02.444 bw ( KiB/s): min= 2176, max= 2608, per=4.26%, avg=2475.10, stdev=110.01, samples=20 00:29:02.444 iops : min= 544, max= 652, avg=618.70, stdev=27.45, samples=20 00:29:02.444 lat (msec) : 4=0.16%, 10=0.10%, 20=2.86%, 50=96.89% 00:29:02.444 cpu : usr=96.85%, sys=1.70%, ctx=237, majf=0, minf=51 00:29:02.444 IO depths : 1=1.5%, 2=3.7%, 4=12.5%, 8=70.4%, 16=11.8%, 32=0.0%, >=64=0.0% 00:29:02.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.444 complete : 0=0.0%, 4=91.2%, 8=3.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.444 issued rwts: total=6198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.444 filename0: (groupid=0, jobs=1): err= 0: pid=3235089: Wed Jul 24 21:54:08 2024 00:29:02.444 read: IOPS=577, BW=2308KiB/s (2364kB/s)(22.6MiB/10005msec) 00:29:02.444 slat (usec): min=6, max=805, avg=29.67, stdev=20.21 00:29:02.444 clat (usec): min=5546, max=50581, avg=27574.28, stdev=5173.59 00:29:02.444 lat (usec): min=5558, max=50597, avg=27603.95, stdev=5171.67 00:29:02.444 clat percentiles (usec): 00:29:02.444 | 1.00th=[15270], 5.00th=[22676], 10.00th=[23987], 20.00th=[24511], 00:29:02.444 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25822], 60.00th=[26870], 00:29:02.444 | 70.00th=[27919], 80.00th=[30802], 90.00th=[35390], 95.00th=[38011], 00:29:02.444 | 99.00th=[43779], 99.50th=[45351], 99.90th=[50594], 99.95th=[50594], 00:29:02.444 | 99.99th=[50594] 00:29:02.444 bw ( KiB/s): min= 2048, max= 2440, per=3.95%, avg=2295.89, stdev=118.77, samples=19 00:29:02.444 iops : min= 512, max= 610, avg=573.89, stdev=29.68, samples=19 00:29:02.444 lat (msec) : 10=0.17%, 20=2.74%, 
50=96.99%, 100=0.10% 00:29:02.444 cpu : usr=96.27%, sys=1.91%, ctx=200, majf=0, minf=30 00:29:02.444 IO depths : 1=0.2%, 2=0.4%, 4=7.3%, 8=77.7%, 16=14.4%, 32=0.0%, >=64=0.0% 00:29:02.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.444 complete : 0=0.0%, 4=90.1%, 8=6.2%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.444 issued rwts: total=5774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.444 filename0: (groupid=0, jobs=1): err= 0: pid=3235090: Wed Jul 24 21:54:08 2024 00:29:02.444 read: IOPS=620, BW=2481KiB/s (2540kB/s)(24.2MiB/10010msec) 00:29:02.444 slat (usec): min=6, max=105, avg=26.21, stdev=14.68 00:29:02.444 clat (usec): min=9421, max=46356, avg=25600.50, stdev=3158.86 00:29:02.444 lat (usec): min=9432, max=46365, avg=25626.70, stdev=3158.89 00:29:02.444 clat percentiles (usec): 00:29:02.444 | 1.00th=[13042], 5.00th=[22676], 10.00th=[23725], 20.00th=[24249], 00:29:02.444 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822], 00:29:02.444 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27919], 95.00th=[29230], 00:29:02.444 | 99.00th=[36963], 99.50th=[39060], 99.90th=[41157], 99.95th=[45876], 00:29:02.444 | 99.99th=[46400] 00:29:02.444 bw ( KiB/s): min= 2304, max= 2592, per=4.26%, avg=2478.89, stdev=77.44, samples=19 00:29:02.444 iops : min= 576, max= 648, avg=619.68, stdev=19.37, samples=19 00:29:02.444 lat (msec) : 10=0.05%, 20=2.63%, 50=97.33% 00:29:02.444 cpu : usr=98.32%, sys=1.04%, ctx=74, majf=0, minf=29 00:29:02.444 IO depths : 1=3.3%, 2=6.9%, 4=20.2%, 8=60.1%, 16=9.5%, 32=0.0%, >=64=0.0% 00:29:02.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.444 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.444 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.444 filename0: (groupid=0, jobs=1): err= 0: pid=3235091: Wed Jul 24 21:54:08 2024 00:29:02.444 read: IOPS=596, BW=2386KiB/s (2443kB/s)(23.3MiB/10011msec) 00:29:02.444 slat (usec): min=6, max=109, avg=34.29, stdev=20.65 00:29:02.444 clat (usec): min=8400, max=49563, avg=26608.39, stdev=4769.27 00:29:02.444 lat (usec): min=8452, max=49581, avg=26642.68, stdev=4767.95 00:29:02.444 clat percentiles (usec): 00:29:02.444 | 1.00th=[13960], 5.00th=[21365], 10.00th=[23462], 20.00th=[24249], 00:29:02.444 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:29:02.444 | 70.00th=[26870], 80.00th=[28181], 90.00th=[32113], 95.00th=[37487], 00:29:02.444 | 99.00th=[42730], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:29:02.444 | 99.99th=[49546] 00:29:02.444 bw ( KiB/s): min= 2208, max= 2560, per=4.10%, avg=2382.00, stdev=132.56, samples=19 00:29:02.444 iops : min= 552, max= 640, avg=595.47, stdev=33.10, samples=19 00:29:02.444 lat (msec) : 10=0.08%, 20=3.68%, 50=96.23% 00:29:02.444 cpu : usr=98.65%, sys=0.78%, ctx=43, majf=0, minf=34 00:29:02.444 IO depths : 1=0.6%, 2=1.4%, 4=13.6%, 8=71.0%, 16=13.3%, 32=0.0%, >=64=0.0% 00:29:02.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.444 complete : 0=0.0%, 4=92.3%, 8=3.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.444 issued rwts: total=5972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.444 filename0: (groupid=0, jobs=1): err= 0: pid=3235092: Wed Jul 24 21:54:08 2024 00:29:02.444 read: 
IOPS=626, BW=2507KiB/s (2567kB/s)(24.5MiB/10023msec) 00:29:02.444 slat (usec): min=6, max=101, avg=31.26, stdev=19.74 00:29:02.444 clat (usec): min=11343, max=47377, avg=25305.09, stdev=3377.40 00:29:02.444 lat (usec): min=11352, max=47422, avg=25336.35, stdev=3379.38 00:29:02.444 clat percentiles (usec): 00:29:02.444 | 1.00th=[14615], 5.00th=[19268], 10.00th=[23200], 20.00th=[24249], 00:29:02.444 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:29:02.444 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27657], 95.00th=[29492], 00:29:02.444 | 99.00th=[38536], 99.50th=[41681], 99.90th=[44827], 99.95th=[47449], 00:29:02.444 | 99.99th=[47449] 00:29:02.444 bw ( KiB/s): min= 2304, max= 2877, per=4.32%, avg=2510.40, stdev=125.10, samples=20 00:29:02.444 iops : min= 576, max= 719, avg=627.55, stdev=31.26, samples=20 00:29:02.444 lat (msec) : 20=5.56%, 50=94.44% 00:29:02.444 cpu : usr=98.75%, sys=0.82%, ctx=33, majf=0, minf=48 00:29:02.444 IO depths : 1=1.4%, 2=3.0%, 4=11.4%, 8=71.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:29:02.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.444 complete : 0=0.0%, 4=91.1%, 8=4.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.444 issued rwts: total=6282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.444 filename0: (groupid=0, jobs=1): err= 0: pid=3235093: Wed Jul 24 21:54:08 2024 00:29:02.444 read: IOPS=575, BW=2300KiB/s (2355kB/s)(22.5MiB/10010msec) 00:29:02.444 slat (nsec): min=6417, max=82028, avg=26807.42, stdev=15744.32 00:29:02.444 clat (usec): min=7934, max=51727, avg=27672.54, stdev=5239.60 00:29:02.444 lat (usec): min=7950, max=51745, avg=27699.35, stdev=5239.72 00:29:02.444 clat percentiles (usec): 00:29:02.444 | 1.00th=[16909], 5.00th=[23200], 10.00th=[23987], 20.00th=[24773], 00:29:02.444 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25822], 60.00th=[26608], 00:29:02.444 | 70.00th=[27657], 80.00th=[30016], 90.00th=[35390], 95.00th=[39060], 00:29:02.444 | 99.00th=[46400], 99.50th=[47973], 99.90th=[51643], 99.95th=[51643], 00:29:02.444 | 99.99th=[51643] 00:29:02.444 bw ( KiB/s): min= 2000, max= 2560, per=3.96%, avg=2303.74, stdev=122.68, samples=19 00:29:02.444 iops : min= 500, max= 640, avg=575.89, stdev=30.65, samples=19 00:29:02.444 lat (msec) : 10=0.02%, 20=2.35%, 50=97.53%, 100=0.10% 00:29:02.444 cpu : usr=98.66%, sys=0.93%, ctx=16, majf=0, minf=47 00:29:02.444 IO depths : 1=1.0%, 2=2.1%, 4=10.1%, 8=73.8%, 16=13.0%, 32=0.0%, >=64=0.0% 00:29:02.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.444 complete : 0=0.0%, 4=90.7%, 8=5.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 issued rwts: total=5756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.445 filename0: (groupid=0, jobs=1): err= 0: pid=3235094: Wed Jul 24 21:54:08 2024 00:29:02.445 read: IOPS=600, BW=2403KiB/s (2460kB/s)(23.5MiB/10005msec) 00:29:02.445 slat (nsec): min=5476, max=98362, avg=34731.69, stdev=19477.81 00:29:02.445 clat (usec): min=4660, max=47921, avg=26403.63, stdev=4259.42 00:29:02.445 lat (usec): min=4672, max=47973, avg=26438.37, stdev=4258.91 00:29:02.445 clat percentiles (usec): 00:29:02.445 | 1.00th=[15401], 5.00th=[22676], 10.00th=[23725], 20.00th=[24249], 00:29:02.445 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:29:02.445 | 70.00th=[26608], 80.00th=[27395], 90.00th=[31327], 95.00th=[35914], 00:29:02.445 | 
99.00th=[41157], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:29:02.445 | 99.99th=[47973] 00:29:02.445 bw ( KiB/s): min= 2176, max= 2560, per=4.11%, avg=2390.68, stdev=109.14, samples=19 00:29:02.445 iops : min= 544, max= 640, avg=597.63, stdev=27.30, samples=19 00:29:02.445 lat (msec) : 10=0.37%, 20=2.31%, 50=97.32% 00:29:02.445 cpu : usr=96.46%, sys=1.71%, ctx=229, majf=0, minf=38 00:29:02.445 IO depths : 1=2.1%, 2=4.5%, 4=14.7%, 8=66.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:29:02.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 complete : 0=0.0%, 4=92.0%, 8=3.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 issued rwts: total=6010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.445 filename0: (groupid=0, jobs=1): err= 0: pid=3235095: Wed Jul 24 21:54:08 2024 00:29:02.445 read: IOPS=616, BW=2465KiB/s (2524kB/s)(24.1MiB/10010msec) 00:29:02.445 slat (usec): min=6, max=110, avg=36.87, stdev=20.54 00:29:02.445 clat (usec): min=13828, max=48020, avg=25674.70, stdev=2818.31 00:29:02.445 lat (usec): min=13838, max=48029, avg=25711.58, stdev=2817.56 00:29:02.445 clat percentiles (usec): 00:29:02.445 | 1.00th=[17171], 5.00th=[22938], 10.00th=[23462], 20.00th=[24249], 00:29:02.445 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822], 00:29:02.445 | 70.00th=[26084], 80.00th=[26870], 90.00th=[27919], 95.00th=[29754], 00:29:02.445 | 99.00th=[37487], 99.50th=[40109], 99.90th=[41681], 99.95th=[47973], 00:29:02.445 | 99.99th=[47973] 00:29:02.445 bw ( KiB/s): min= 2304, max= 2584, per=4.25%, avg=2470.42, stdev=72.55, samples=19 00:29:02.445 iops : min= 576, max= 646, avg=617.58, stdev=18.11, samples=19 00:29:02.445 lat (msec) : 20=2.45%, 50=97.55% 00:29:02.445 cpu : usr=98.58%, sys=0.84%, ctx=79, majf=0, minf=31 00:29:02.445 IO depths : 1=2.9%, 2=6.0%, 4=18.0%, 8=63.0%, 16=10.1%, 32=0.0%, >=64=0.0% 00:29:02.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 complete : 0=0.0%, 4=92.8%, 8=1.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 issued rwts: total=6168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.445 filename1: (groupid=0, jobs=1): err= 0: pid=3235096: Wed Jul 24 21:54:08 2024 00:29:02.445 read: IOPS=619, BW=2477KiB/s (2536kB/s)(24.2MiB/10003msec) 00:29:02.445 slat (usec): min=6, max=313, avg=34.75, stdev=18.62 00:29:02.445 clat (usec): min=8132, max=45994, avg=25554.99, stdev=3214.57 00:29:02.445 lat (usec): min=8141, max=46002, avg=25589.74, stdev=3215.01 00:29:02.445 clat percentiles (usec): 00:29:02.445 | 1.00th=[15270], 5.00th=[22676], 10.00th=[23725], 20.00th=[24249], 00:29:02.445 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:29:02.445 | 70.00th=[26084], 80.00th=[26870], 90.00th=[27919], 95.00th=[29230], 00:29:02.445 | 99.00th=[40633], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:29:02.445 | 99.99th=[45876] 00:29:02.445 bw ( KiB/s): min= 2299, max= 2880, per=4.26%, avg=2473.00, stdev=123.55, samples=19 00:29:02.445 iops : min= 574, max= 720, avg=618.21, stdev=30.95, samples=19 00:29:02.445 lat (msec) : 10=0.16%, 20=3.41%, 50=96.43% 00:29:02.445 cpu : usr=93.60%, sys=3.00%, ctx=199, majf=0, minf=33 00:29:02.445 IO depths : 1=5.6%, 2=11.2%, 4=23.1%, 8=53.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:29:02.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 complete : 0=0.0%, 4=93.6%, 
8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 issued rwts: total=6194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.445 filename1: (groupid=0, jobs=1): err= 0: pid=3235097: Wed Jul 24 21:54:08 2024 00:29:02.445 read: IOPS=621, BW=2487KiB/s (2546kB/s)(24.3MiB/10010msec) 00:29:02.445 slat (nsec): min=6137, max=85071, avg=34462.65, stdev=15855.68 00:29:02.445 clat (usec): min=9597, max=48405, avg=25450.28, stdev=2118.35 00:29:02.445 lat (usec): min=9610, max=48423, avg=25484.74, stdev=2117.00 00:29:02.445 clat percentiles (usec): 00:29:02.445 | 1.00th=[18482], 5.00th=[23462], 10.00th=[23987], 20.00th=[24249], 00:29:02.445 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:29:02.445 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[28181], 00:29:02.445 | 99.00th=[31851], 99.50th=[34866], 99.90th=[39584], 99.95th=[39584], 00:29:02.445 | 99.99th=[48497] 00:29:02.445 bw ( KiB/s): min= 2304, max= 2560, per=4.27%, avg=2482.21, stdev=89.96, samples=19 00:29:02.445 iops : min= 576, max= 640, avg=620.53, stdev=22.47, samples=19 00:29:02.445 lat (msec) : 10=0.11%, 20=1.08%, 50=98.81% 00:29:02.445 cpu : usr=98.78%, sys=0.79%, ctx=58, majf=0, minf=41 00:29:02.445 IO depths : 1=4.6%, 2=9.3%, 4=20.1%, 8=57.0%, 16=9.0%, 32=0.0%, >=64=0.0% 00:29:02.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 complete : 0=0.0%, 4=93.2%, 8=2.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 issued rwts: total=6223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.445 filename1: (groupid=0, jobs=1): err= 0: pid=3235098: Wed Jul 24 21:54:08 2024 00:29:02.445 read: IOPS=639, BW=2560KiB/s (2621kB/s)(25.0MiB/10011msec) 00:29:02.445 slat (nsec): min=6380, max=68432, avg=16579.90, stdev=11061.28 00:29:02.445 clat (usec): min=5161, max=47284, avg=24879.59, stdev=3641.23 00:29:02.445 lat (usec): min=5169, max=47330, avg=24896.17, stdev=3642.30 00:29:02.445 clat percentiles (usec): 00:29:02.445 | 1.00th=[10290], 5.00th=[16581], 10.00th=[22938], 20.00th=[24249], 00:29:02.445 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:29:02.445 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27395], 95.00th=[28181], 00:29:02.445 | 99.00th=[34341], 99.50th=[38011], 99.90th=[42206], 99.95th=[42206], 00:29:02.445 | 99.99th=[47449] 00:29:02.445 bw ( KiB/s): min= 2432, max= 2784, per=4.40%, avg=2556.84, stdev=104.34, samples=19 00:29:02.445 iops : min= 608, max= 696, avg=639.16, stdev=26.07, samples=19 00:29:02.445 lat (msec) : 10=0.87%, 20=6.81%, 50=92.32% 00:29:02.445 cpu : usr=98.93%, sys=0.71%, ctx=19, majf=0, minf=45 00:29:02.445 IO depths : 1=4.0%, 2=8.1%, 4=20.0%, 8=59.1%, 16=8.8%, 32=0.0%, >=64=0.0% 00:29:02.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 complete : 0=0.0%, 4=92.7%, 8=1.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 issued rwts: total=6406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.445 filename1: (groupid=0, jobs=1): err= 0: pid=3235099: Wed Jul 24 21:54:08 2024 00:29:02.445 read: IOPS=606, BW=2426KiB/s (2484kB/s)(23.7MiB/10009msec) 00:29:02.445 slat (usec): min=6, max=308, avg=30.69, stdev=17.77 00:29:02.445 clat (usec): min=9390, max=47916, avg=26178.40, stdev=3947.64 00:29:02.445 lat (usec): min=9405, max=47934, avg=26209.09, stdev=3946.88 00:29:02.445 clat 
percentiles (usec): 00:29:02.445 | 1.00th=[15926], 5.00th=[22152], 10.00th=[23725], 20.00th=[24249], 00:29:02.445 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:29:02.445 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29754], 95.00th=[34866], 00:29:02.445 | 99.00th=[39584], 99.50th=[40633], 99.90th=[47973], 99.95th=[47973], 00:29:02.445 | 99.99th=[47973] 00:29:02.445 bw ( KiB/s): min= 1920, max= 2592, per=4.15%, avg=2414.21, stdev=172.54, samples=19 00:29:02.445 iops : min= 480, max= 648, avg=603.53, stdev=43.11, samples=19 00:29:02.445 lat (msec) : 10=0.26%, 20=3.10%, 50=96.64% 00:29:02.445 cpu : usr=95.06%, sys=2.27%, ctx=173, majf=0, minf=29 00:29:02.445 IO depths : 1=1.7%, 2=3.9%, 4=13.2%, 8=68.3%, 16=12.9%, 32=0.0%, >=64=0.0% 00:29:02.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 complete : 0=0.0%, 4=92.0%, 8=4.4%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 issued rwts: total=6070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.445 filename1: (groupid=0, jobs=1): err= 0: pid=3235100: Wed Jul 24 21:54:08 2024 00:29:02.445 read: IOPS=622, BW=2488KiB/s (2548kB/s)(24.3MiB/10006msec) 00:29:02.445 slat (nsec): min=6594, max=88432, avg=38833.77, stdev=15382.57 00:29:02.445 clat (usec): min=11060, max=38905, avg=25396.65, stdev=2121.01 00:29:02.445 lat (usec): min=11085, max=38951, avg=25435.48, stdev=2121.07 00:29:02.445 clat percentiles (usec): 00:29:02.445 | 1.00th=[17957], 5.00th=[23462], 10.00th=[23725], 20.00th=[24249], 00:29:02.445 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:29:02.445 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:29:02.445 | 99.00th=[33817], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:29:02.445 | 99.99th=[39060] 00:29:02.445 bw ( KiB/s): min= 2304, max= 2560, per=4.26%, avg=2478.26, stdev=85.02, samples=19 00:29:02.445 iops : min= 576, max= 640, avg=619.47, stdev=21.22, samples=19 00:29:02.445 lat (msec) : 20=1.59%, 50=98.41% 00:29:02.445 cpu : usr=97.60%, sys=1.26%, ctx=197, majf=0, minf=39 00:29:02.445 IO depths : 1=5.4%, 2=11.0%, 4=24.0%, 8=52.5%, 16=7.1%, 32=0.0%, >=64=0.0% 00:29:02.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.445 issued rwts: total=6224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.445 filename1: (groupid=0, jobs=1): err= 0: pid=3235101: Wed Jul 24 21:54:08 2024 00:29:02.445 read: IOPS=620, BW=2481KiB/s (2540kB/s)(24.2MiB/10003msec) 00:29:02.445 slat (usec): min=6, max=100, avg=33.89, stdev=18.28 00:29:02.445 clat (usec): min=7345, max=47650, avg=25534.92, stdev=4006.30 00:29:02.445 lat (usec): min=7359, max=47658, avg=25568.82, stdev=4006.85 00:29:02.445 clat percentiles (usec): 00:29:02.445 | 1.00th=[13566], 5.00th=[18482], 10.00th=[23200], 20.00th=[24249], 00:29:02.446 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25822], 00:29:02.446 | 70.00th=[26084], 80.00th=[26870], 90.00th=[28181], 95.00th=[32900], 00:29:02.446 | 99.00th=[41157], 99.50th=[42206], 99.90th=[46400], 99.95th=[47449], 00:29:02.446 | 99.99th=[47449] 00:29:02.446 bw ( KiB/s): min= 2304, max= 2688, per=4.26%, avg=2475.47, stdev=85.66, samples=19 00:29:02.446 iops : min= 576, max= 672, avg=618.84, stdev=21.39, samples=19 00:29:02.446 lat (msec) : 10=0.39%, 20=5.88%, 
50=93.73% 00:29:02.446 cpu : usr=98.53%, sys=0.91%, ctx=61, majf=0, minf=34 00:29:02.446 IO depths : 1=3.2%, 2=6.5%, 4=19.1%, 8=61.3%, 16=10.0%, 32=0.0%, >=64=0.0% 00:29:02.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 complete : 0=0.0%, 4=93.3%, 8=1.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 issued rwts: total=6204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.446 filename1: (groupid=0, jobs=1): err= 0: pid=3235102: Wed Jul 24 21:54:08 2024 00:29:02.446 read: IOPS=622, BW=2489KiB/s (2549kB/s)(24.3MiB/10014msec) 00:29:02.446 slat (usec): min=6, max=491, avg=30.79, stdev=21.42 00:29:02.446 clat (usec): min=7817, max=47078, avg=25475.63, stdev=3667.00 00:29:02.446 lat (usec): min=7826, max=47110, avg=25506.42, stdev=3669.45 00:29:02.446 clat percentiles (usec): 00:29:02.446 | 1.00th=[13829], 5.00th=[19006], 10.00th=[23462], 20.00th=[24249], 00:29:02.446 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:29:02.446 | 70.00th=[26084], 80.00th=[26870], 90.00th=[27919], 95.00th=[31065], 00:29:02.446 | 99.00th=[39060], 99.50th=[41681], 99.90th=[46400], 99.95th=[46924], 00:29:02.446 | 99.99th=[46924] 00:29:02.446 bw ( KiB/s): min= 2304, max= 2688, per=4.28%, avg=2487.70, stdev=100.51, samples=20 00:29:02.446 iops : min= 576, max= 672, avg=621.90, stdev=25.11, samples=20 00:29:02.446 lat (msec) : 10=0.13%, 20=5.23%, 50=94.64% 00:29:02.446 cpu : usr=95.35%, sys=2.03%, ctx=55, majf=0, minf=51 00:29:02.446 IO depths : 1=4.0%, 2=8.1%, 4=19.0%, 8=59.7%, 16=9.3%, 32=0.0%, >=64=0.0% 00:29:02.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 complete : 0=0.0%, 4=93.0%, 8=1.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 issued rwts: total=6232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.446 filename1: (groupid=0, jobs=1): err= 0: pid=3235103: Wed Jul 24 21:54:08 2024 00:29:02.446 read: IOPS=575, BW=2302KiB/s (2357kB/s)(22.5MiB/10005msec) 00:29:02.446 slat (nsec): min=5893, max=98120, avg=28322.03, stdev=18028.18 00:29:02.446 clat (usec): min=6978, max=50283, avg=27653.48, stdev=5305.55 00:29:02.446 lat (usec): min=6997, max=50309, avg=27681.80, stdev=5303.74 00:29:02.446 clat percentiles (usec): 00:29:02.446 | 1.00th=[14484], 5.00th=[23200], 10.00th=[23987], 20.00th=[24511], 00:29:02.446 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25822], 60.00th=[26870], 00:29:02.446 | 70.00th=[27919], 80.00th=[30540], 90.00th=[35914], 95.00th=[39060], 00:29:02.446 | 99.00th=[43254], 99.50th=[45876], 99.90th=[49546], 99.95th=[50070], 00:29:02.446 | 99.99th=[50070] 00:29:02.446 bw ( KiB/s): min= 2064, max= 2544, per=3.94%, avg=2292.16, stdev=114.20, samples=19 00:29:02.446 iops : min= 516, max= 636, avg=573.00, stdev=28.56, samples=19 00:29:02.446 lat (msec) : 10=0.28%, 20=2.78%, 50=96.87%, 100=0.07% 00:29:02.446 cpu : usr=97.22%, sys=1.50%, ctx=32, majf=0, minf=24 00:29:02.446 IO depths : 1=0.1%, 2=0.6%, 4=7.7%, 8=77.1%, 16=14.4%, 32=0.0%, >=64=0.0% 00:29:02.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 complete : 0=0.0%, 4=90.3%, 8=5.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 issued rwts: total=5758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.446 filename2: (groupid=0, jobs=1): err= 0: pid=3235104: Wed Jul 24 21:54:08 2024 00:29:02.446 
read: IOPS=622, BW=2489KiB/s (2549kB/s)(24.3MiB/10003msec) 00:29:02.446 slat (usec): min=6, max=431, avg=37.88, stdev=19.71 00:29:02.446 clat (usec): min=7078, max=44295, avg=25387.64, stdev=2111.04 00:29:02.446 lat (usec): min=7096, max=44328, avg=25425.52, stdev=2110.36 00:29:02.446 clat percentiles (usec): 00:29:02.446 | 1.00th=[21103], 5.00th=[23200], 10.00th=[23725], 20.00th=[24249], 00:29:02.446 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:29:02.446 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27132], 95.00th=[27919], 00:29:02.446 | 99.00th=[30540], 99.50th=[35914], 99.90th=[43254], 99.95th=[44303], 00:29:02.446 | 99.99th=[44303] 00:29:02.446 bw ( KiB/s): min= 2304, max= 2560, per=4.27%, avg=2483.89, stdev=78.95, samples=19 00:29:02.446 iops : min= 576, max= 640, avg=620.95, stdev=19.71, samples=19 00:29:02.446 lat (msec) : 10=0.13%, 20=0.69%, 50=99.18% 00:29:02.446 cpu : usr=96.47%, sys=1.68%, ctx=60, majf=0, minf=46 00:29:02.446 IO depths : 1=5.8%, 2=11.6%, 4=24.2%, 8=51.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:29:02.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 issued rwts: total=6224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.446 filename2: (groupid=0, jobs=1): err= 0: pid=3235105: Wed Jul 24 21:54:08 2024 00:29:02.446 read: IOPS=580, BW=2323KiB/s (2379kB/s)(22.7MiB/10010msec) 00:29:02.446 slat (usec): min=6, max=107, avg=31.87, stdev=20.20 00:29:02.446 clat (usec): min=6989, max=51525, avg=27382.58, stdev=5082.25 00:29:02.446 lat (usec): min=7006, max=51569, avg=27414.45, stdev=5079.93 00:29:02.446 clat percentiles (usec): 00:29:02.446 | 1.00th=[16057], 5.00th=[22414], 10.00th=[23725], 20.00th=[24511], 00:29:02.446 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:29:02.446 | 70.00th=[27657], 80.00th=[30016], 90.00th=[35390], 95.00th=[38536], 00:29:02.446 | 99.00th=[44303], 99.50th=[46400], 99.90th=[51119], 99.95th=[51643], 00:29:02.446 | 99.99th=[51643] 00:29:02.446 bw ( KiB/s): min= 2096, max= 2480, per=3.99%, avg=2317.63, stdev=103.28, samples=19 00:29:02.446 iops : min= 524, max= 620, avg=579.37, stdev=25.84, samples=19 00:29:02.446 lat (msec) : 10=0.03%, 20=2.63%, 50=97.13%, 100=0.21% 00:29:02.446 cpu : usr=98.70%, sys=0.85%, ctx=65, majf=0, minf=45 00:29:02.446 IO depths : 1=0.4%, 2=0.8%, 4=8.3%, 8=76.4%, 16=14.2%, 32=0.0%, >=64=0.0% 00:29:02.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 complete : 0=0.0%, 4=90.4%, 8=5.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 issued rwts: total=5813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.446 filename2: (groupid=0, jobs=1): err= 0: pid=3235106: Wed Jul 24 21:54:08 2024 00:29:02.446 read: IOPS=612, BW=2451KiB/s (2510kB/s)(24.0MiB/10024msec) 00:29:02.446 slat (usec): min=6, max=110, avg=29.66, stdev=19.09 00:29:02.446 clat (usec): min=10752, max=49903, avg=25929.99, stdev=3452.52 00:29:02.446 lat (usec): min=10762, max=49958, avg=25959.64, stdev=3452.59 00:29:02.446 clat percentiles (usec): 00:29:02.446 | 1.00th=[15795], 5.00th=[22676], 10.00th=[23725], 20.00th=[24511], 00:29:02.446 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:29:02.446 | 70.00th=[26346], 80.00th=[27132], 90.00th=[28181], 95.00th=[30540], 00:29:02.446 | 99.00th=[42206], 
99.50th=[43779], 99.90th=[49021], 99.95th=[50070], 00:29:02.446 | 99.99th=[50070] 00:29:02.446 bw ( KiB/s): min= 2000, max= 2576, per=4.22%, avg=2450.15, stdev=130.34, samples=20 00:29:02.446 iops : min= 500, max= 644, avg=612.50, stdev=32.59, samples=20 00:29:02.446 lat (msec) : 20=2.43%, 50=97.57% 00:29:02.446 cpu : usr=98.55%, sys=0.95%, ctx=33, majf=0, minf=34 00:29:02.446 IO depths : 1=2.0%, 2=4.5%, 4=13.7%, 8=68.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:29:02.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 complete : 0=0.0%, 4=91.7%, 8=3.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 issued rwts: total=6142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.446 filename2: (groupid=0, jobs=1): err= 0: pid=3235107: Wed Jul 24 21:54:08 2024 00:29:02.446 read: IOPS=566, BW=2267KiB/s (2322kB/s)(22.2MiB/10004msec) 00:29:02.446 slat (nsec): min=6313, max=81901, avg=28669.71, stdev=17036.59 00:29:02.446 clat (usec): min=4590, max=51167, avg=28075.57, stdev=5830.48 00:29:02.446 lat (usec): min=4599, max=51200, avg=28104.24, stdev=5828.87 00:29:02.446 clat percentiles (usec): 00:29:02.446 | 1.00th=[13566], 5.00th=[22152], 10.00th=[23987], 20.00th=[24773], 00:29:02.446 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[27395], 00:29:02.446 | 70.00th=[28967], 80.00th=[32113], 90.00th=[36439], 95.00th=[39584], 00:29:02.446 | 99.00th=[46400], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 00:29:02.446 | 99.99th=[51119] 00:29:02.446 bw ( KiB/s): min= 1976, max= 2432, per=3.87%, avg=2250.21, stdev=120.69, samples=19 00:29:02.446 iops : min= 494, max= 608, avg=562.47, stdev=30.20, samples=19 00:29:02.446 lat (msec) : 10=0.53%, 20=2.79%, 50=96.35%, 100=0.34% 00:29:02.446 cpu : usr=98.14%, sys=1.11%, ctx=94, majf=0, minf=54 00:29:02.446 IO depths : 1=0.3%, 2=0.7%, 4=8.4%, 8=76.4%, 16=14.2%, 32=0.0%, >=64=0.0% 00:29:02.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 complete : 0=0.0%, 4=90.4%, 8=5.8%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.446 issued rwts: total=5671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.446 filename2: (groupid=0, jobs=1): err= 0: pid=3235108: Wed Jul 24 21:54:08 2024 00:29:02.446 read: IOPS=608, BW=2433KiB/s (2491kB/s)(23.8MiB/10008msec) 00:29:02.446 slat (nsec): min=6536, max=83369, avg=29092.24, stdev=15955.18 00:29:02.446 clat (usec): min=4821, max=49881, avg=26105.86, stdev=3841.86 00:29:02.446 lat (usec): min=4852, max=49890, avg=26134.95, stdev=3841.50 00:29:02.446 clat percentiles (usec): 00:29:02.446 | 1.00th=[14222], 5.00th=[22676], 10.00th=[23725], 20.00th=[24249], 00:29:02.446 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:29:02.446 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28967], 95.00th=[34341], 00:29:02.446 | 99.00th=[39584], 99.50th=[41157], 99.90th=[49021], 99.95th=[49021], 00:29:02.446 | 99.99th=[50070] 00:29:02.446 bw ( KiB/s): min= 2248, max= 2656, per=4.17%, avg=2425.00, stdev=103.63, samples=19 00:29:02.446 iops : min= 562, max= 664, avg=606.21, stdev=25.90, samples=19 00:29:02.446 lat (msec) : 10=0.07%, 20=2.99%, 50=96.94% 00:29:02.446 cpu : usr=90.53%, sys=4.20%, ctx=218, majf=0, minf=29 00:29:02.446 IO depths : 1=2.7%, 2=5.6%, 4=14.8%, 8=65.6%, 16=11.3%, 32=0.0%, >=64=0.0% 00:29:02.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.447 complete : 0=0.0%, 
4=91.7%, 8=4.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.447 issued rwts: total=6087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.447 filename2: (groupid=0, jobs=1): err= 0: pid=3235109: Wed Jul 24 21:54:08 2024 00:29:02.447 read: IOPS=582, BW=2331KiB/s (2387kB/s)(22.8MiB/10003msec) 00:29:02.447 slat (usec): min=5, max=178, avg=33.25, stdev=20.42 00:29:02.447 clat (usec): min=6948, max=50142, avg=27263.44, stdev=4979.77 00:29:02.447 lat (usec): min=6972, max=50206, avg=27296.70, stdev=4977.62 00:29:02.447 clat percentiles (usec): 00:29:02.447 | 1.00th=[13829], 5.00th=[22676], 10.00th=[23725], 20.00th=[24511], 00:29:02.447 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:29:02.447 | 70.00th=[27657], 80.00th=[29492], 90.00th=[35390], 95.00th=[38011], 00:29:02.447 | 99.00th=[42206], 99.50th=[43254], 99.90th=[46924], 99.95th=[50070], 00:29:02.447 | 99.99th=[50070] 00:29:02.447 bw ( KiB/s): min= 2048, max= 2480, per=3.99%, avg=2320.16, stdev=100.02, samples=19 00:29:02.447 iops : min= 512, max= 620, avg=580.00, stdev=24.96, samples=19 00:29:02.447 lat (msec) : 10=0.41%, 20=2.33%, 50=97.20%, 100=0.05% 00:29:02.447 cpu : usr=98.03%, sys=0.97%, ctx=24, majf=0, minf=40 00:29:02.447 IO depths : 1=1.1%, 2=2.3%, 4=10.1%, 8=73.0%, 16=13.5%, 32=0.0%, >=64=0.0% 00:29:02.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.447 complete : 0=0.0%, 4=90.9%, 8=5.3%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.447 issued rwts: total=5830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.447 filename2: (groupid=0, jobs=1): err= 0: pid=3235110: Wed Jul 24 21:54:08 2024 00:29:02.447 read: IOPS=610, BW=2441KiB/s (2499kB/s)(23.8MiB/10003msec) 00:29:02.447 slat (usec): min=6, max=110, avg=34.43, stdev=19.89 00:29:02.447 clat (usec): min=6453, max=48622, avg=25990.17, stdev=4364.71 00:29:02.447 lat (usec): min=6491, max=48680, avg=26024.60, stdev=4366.30 00:29:02.447 clat percentiles (usec): 00:29:02.447 | 1.00th=[13698], 5.00th=[19530], 10.00th=[23200], 20.00th=[24249], 00:29:02.447 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822], 00:29:02.447 | 70.00th=[26346], 80.00th=[27395], 90.00th=[30016], 95.00th=[34866], 00:29:02.447 | 99.00th=[41681], 99.50th=[44827], 99.90th=[46400], 99.95th=[46400], 00:29:02.447 | 99.99th=[48497] 00:29:02.447 bw ( KiB/s): min= 2176, max= 2608, per=4.19%, avg=2434.26, stdev=111.82, samples=19 00:29:02.447 iops : min= 544, max= 652, avg=608.53, stdev=28.01, samples=19 00:29:02.447 lat (msec) : 10=0.23%, 20=4.98%, 50=94.79% 00:29:02.447 cpu : usr=98.98%, sys=0.58%, ctx=18, majf=0, minf=45 00:29:02.447 IO depths : 1=1.7%, 2=3.8%, 4=14.4%, 8=67.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:29:02.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.447 complete : 0=0.0%, 4=92.4%, 8=3.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.447 issued rwts: total=6104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.447 filename2: (groupid=0, jobs=1): err= 0: pid=3235111: Wed Jul 24 21:54:08 2024 00:29:02.447 read: IOPS=609, BW=2438KiB/s (2497kB/s)(23.8MiB/10003msec) 00:29:02.447 slat (usec): min=6, max=103, avg=35.83, stdev=21.12 00:29:02.447 clat (usec): min=3577, max=50952, avg=26046.69, stdev=4265.29 00:29:02.447 lat (usec): min=3584, max=50984, avg=26082.53, stdev=4265.42 00:29:02.447 clat 
percentiles (usec): 00:29:02.447 | 1.00th=[12125], 5.00th=[21890], 10.00th=[23462], 20.00th=[24249], 00:29:02.447 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822], 00:29:02.447 | 70.00th=[26608], 80.00th=[27132], 90.00th=[29754], 95.00th=[34341], 00:29:02.447 | 99.00th=[40633], 99.50th=[42730], 99.90th=[50594], 99.95th=[50594], 00:29:02.447 | 99.99th=[51119] 00:29:02.447 bw ( KiB/s): min= 2304, max= 2576, per=4.17%, avg=2425.42, stdev=74.91, samples=19 00:29:02.447 iops : min= 576, max= 644, avg=606.32, stdev=18.78, samples=19 00:29:02.447 lat (msec) : 4=0.10%, 10=0.38%, 20=3.38%, 50=95.88%, 100=0.26% 00:29:02.447 cpu : usr=98.99%, sys=0.60%, ctx=20, majf=0, minf=43 00:29:02.447 IO depths : 1=0.7%, 2=1.5%, 4=10.1%, 8=74.1%, 16=13.7%, 32=0.0%, >=64=0.0% 00:29:02.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.447 complete : 0=0.0%, 4=90.8%, 8=5.3%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.447 issued rwts: total=6097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.447 00:29:02.447 Run status group 0 (all jobs): 00:29:02.447 READ: bw=56.8MiB/s (59.5MB/s), 2267KiB/s-2560KiB/s (2322kB/s-2621kB/s), io=569MiB (597MB), run=10003-10024msec 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.447 21:54:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.447 bdev_null0 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.447 [2024-07-24 21:54:09.243903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:02.447 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.448 bdev_null1 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:02.448 21:54:09 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:02.448 { 00:29:02.448 "params": { 00:29:02.448 "name": "Nvme$subsystem", 00:29:02.448 "trtype": "$TEST_TRANSPORT", 00:29:02.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:02.448 "adrfam": "ipv4", 00:29:02.448 "trsvcid": "$NVMF_PORT", 00:29:02.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:02.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:02.448 "hdgst": ${hdgst:-false}, 00:29:02.448 "ddgst": ${ddgst:-false} 00:29:02.448 }, 00:29:02.448 "method": "bdev_nvme_attach_controller" 00:29:02.448 } 00:29:02.448 EOF 00:29:02.448 )") 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:02.448 { 00:29:02.448 "params": { 00:29:02.448 "name": "Nvme$subsystem", 00:29:02.448 "trtype": "$TEST_TRANSPORT", 00:29:02.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:02.448 "adrfam": "ipv4", 00:29:02.448 "trsvcid": "$NVMF_PORT", 00:29:02.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:02.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:02.448 "hdgst": ${hdgst:-false}, 00:29:02.448 "ddgst": ${ddgst:-false} 00:29:02.448 }, 00:29:02.448 "method": "bdev_nvme_attach_controller" 00:29:02.448 } 00:29:02.448 EOF 
00:29:02.448 )") 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:02.448 "params": { 00:29:02.448 "name": "Nvme0", 00:29:02.448 "trtype": "tcp", 00:29:02.448 "traddr": "10.0.0.2", 00:29:02.448 "adrfam": "ipv4", 00:29:02.448 "trsvcid": "4420", 00:29:02.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:02.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:02.448 "hdgst": false, 00:29:02.448 "ddgst": false 00:29:02.448 }, 00:29:02.448 "method": "bdev_nvme_attach_controller" 00:29:02.448 },{ 00:29:02.448 "params": { 00:29:02.448 "name": "Nvme1", 00:29:02.448 "trtype": "tcp", 00:29:02.448 "traddr": "10.0.0.2", 00:29:02.448 "adrfam": "ipv4", 00:29:02.448 "trsvcid": "4420", 00:29:02.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:02.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:02.448 "hdgst": false, 00:29:02.448 "ddgst": false 00:29:02.448 }, 00:29:02.448 "method": "bdev_nvme_attach_controller" 00:29:02.448 }' 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:02.448 21:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:02.448 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:02.448 ... 00:29:02.448 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:02.448 ... 
00:29:02.448 fio-3.35 00:29:02.448 Starting 4 threads 00:29:02.448 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.719 00:29:07.719 filename0: (groupid=0, jobs=1): err= 0: pid=3237567: Wed Jul 24 21:54:15 2024 00:29:07.719 read: IOPS=2627, BW=20.5MiB/s (21.5MB/s)(103MiB/5003msec) 00:29:07.719 slat (nsec): min=6209, max=67482, avg=14848.88, stdev=9424.71 00:29:07.719 clat (usec): min=1723, max=6354, avg=3006.51, stdev=415.76 00:29:07.719 lat (usec): min=1731, max=6388, avg=3021.36, stdev=415.76 00:29:07.719 clat percentiles (usec): 00:29:07.719 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 00:29:07.719 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 3032], 60.00th=[ 3064], 00:29:07.719 | 70.00th=[ 3163], 80.00th=[ 3326], 90.00th=[ 3523], 95.00th=[ 3687], 00:29:07.719 | 99.00th=[ 4113], 99.50th=[ 4293], 99.90th=[ 4883], 99.95th=[ 6063], 00:29:07.719 | 99.99th=[ 6259] 00:29:07.719 bw ( KiB/s): min=20496, max=21408, per=25.34%, avg=21011.56, stdev=307.85, samples=9 00:29:07.719 iops : min= 2562, max= 2676, avg=2626.44, stdev=38.48, samples=9 00:29:07.719 lat (msec) : 2=0.37%, 4=98.09%, 10=1.54% 00:29:07.719 cpu : usr=97.40%, sys=2.24%, ctx=7, majf=0, minf=104 00:29:07.719 IO depths : 1=0.1%, 2=1.2%, 4=65.8%, 8=33.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.719 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.719 issued rwts: total=13147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.719 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:07.719 filename0: (groupid=0, jobs=1): err= 0: pid=3237568: Wed Jul 24 21:54:15 2024 00:29:07.719 read: IOPS=2568, BW=20.1MiB/s (21.0MB/s)(100MiB/5001msec) 00:29:07.719 slat (nsec): min=6062, max=86024, avg=11611.77, stdev=6601.27 00:29:07.719 clat (usec): min=1819, max=5088, avg=3085.72, stdev=410.55 00:29:07.719 lat (usec): min=1826, max=5110, avg=3097.34, stdev=410.38 00:29:07.719 clat percentiles (usec): 00:29:07.719 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2769], 00:29:07.719 | 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3130], 00:29:07.719 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3589], 95.00th=[ 3818], 00:29:07.719 | 99.00th=[ 4178], 99.50th=[ 4359], 99.90th=[ 4752], 99.95th=[ 4817], 00:29:07.719 | 99.99th=[ 4948] 00:29:07.719 bw ( KiB/s): min=20288, max=20912, per=24.78%, avg=20547.56, stdev=201.40, samples=9 00:29:07.719 iops : min= 2536, max= 2614, avg=2568.44, stdev=25.17, samples=9 00:29:07.719 lat (msec) : 2=0.16%, 4=97.52%, 10=2.32% 00:29:07.719 cpu : usr=97.34%, sys=2.28%, ctx=13, majf=0, minf=78 00:29:07.719 IO depths : 1=0.1%, 2=1.1%, 4=65.7%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.719 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.719 issued rwts: total=12846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.719 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:07.719 filename1: (groupid=0, jobs=1): err= 0: pid=3237569: Wed Jul 24 21:54:15 2024 00:29:07.719 read: IOPS=2605, BW=20.4MiB/s (21.3MB/s)(102MiB/5002msec) 00:29:07.719 slat (nsec): min=6001, max=52005, avg=11668.21, stdev=6444.28 00:29:07.719 clat (usec): min=1479, max=5666, avg=3040.38, stdev=416.13 00:29:07.719 lat (usec): min=1486, max=5692, avg=3052.05, stdev=416.07 00:29:07.719 clat percentiles (usec): 00:29:07.719 | 1.00th=[ 2114], 5.00th=[ 2343], 
10.00th=[ 2474], 20.00th=[ 2704], 00:29:07.719 | 30.00th=[ 2835], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3097], 00:29:07.719 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3556], 95.00th=[ 3720], 00:29:07.719 | 99.00th=[ 4178], 99.50th=[ 4293], 99.90th=[ 4686], 99.95th=[ 5342], 00:29:07.719 | 99.99th=[ 5604] 00:29:07.719 bw ( KiB/s): min=20432, max=21600, per=25.19%, avg=20888.89, stdev=410.43, samples=9 00:29:07.719 iops : min= 2554, max= 2700, avg=2611.11, stdev=51.30, samples=9 00:29:07.719 lat (msec) : 2=0.27%, 4=97.86%, 10=1.87% 00:29:07.719 cpu : usr=96.92%, sys=2.70%, ctx=7, majf=0, minf=92 00:29:07.719 IO depths : 1=0.1%, 2=1.0%, 4=65.9%, 8=33.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.719 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.719 issued rwts: total=13035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.719 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:07.719 filename1: (groupid=0, jobs=1): err= 0: pid=3237570: Wed Jul 24 21:54:15 2024 00:29:07.719 read: IOPS=2565, BW=20.0MiB/s (21.0MB/s)(100MiB/5002msec) 00:29:07.719 slat (nsec): min=6052, max=52018, avg=11037.67, stdev=6116.57 00:29:07.719 clat (usec): min=1875, max=7914, avg=3090.55, stdev=424.54 00:29:07.719 lat (usec): min=1882, max=7936, avg=3101.59, stdev=424.45 00:29:07.719 clat percentiles (usec): 00:29:07.719 | 1.00th=[ 2180], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2769], 00:29:07.719 | 30.00th=[ 2933], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3130], 00:29:07.719 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3621], 95.00th=[ 3818], 00:29:07.719 | 99.00th=[ 4178], 99.50th=[ 4359], 99.90th=[ 5211], 99.95th=[ 6718], 00:29:07.719 | 99.99th=[ 6718] 00:29:07.719 bw ( KiB/s): min=20224, max=20816, per=24.77%, avg=20541.44, stdev=174.07, samples=9 00:29:07.719 iops : min= 2528, max= 2602, avg=2567.67, stdev=21.75, samples=9 00:29:07.719 lat (msec) : 2=0.19%, 4=97.57%, 10=2.24% 00:29:07.719 cpu : usr=96.78%, sys=2.88%, ctx=6, majf=0, minf=105 00:29:07.719 IO depths : 1=0.1%, 2=0.9%, 4=65.9%, 8=33.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.719 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.719 issued rwts: total=12833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.719 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:07.719 00:29:07.719 Run status group 0 (all jobs): 00:29:07.720 READ: bw=81.0MiB/s (84.9MB/s), 20.0MiB/s-20.5MiB/s (21.0MB/s-21.5MB/s), io=405MiB (425MB), run=5001-5003msec 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.720 00:29:07.720 real 0m23.959s 00:29:07.720 user 4m48.178s 00:29:07.720 sys 0m5.366s 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:07.720 21:54:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.720 ************************************ 00:29:07.720 END TEST fio_dif_rand_params 00:29:07.720 ************************************ 00:29:07.720 21:54:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:07.720 21:54:15 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:07.720 21:54:15 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.720 21:54:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:07.720 ************************************ 00:29:07.720 START TEST fio_dif_digest 00:29:07.720 ************************************ 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:07.720 
21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.720 bdev_null0 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.720 [2024-07-24 21:54:15.648225] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:07.720 { 00:29:07.720 "params": { 00:29:07.720 "name": "Nvme$subsystem", 00:29:07.720 "trtype": "$TEST_TRANSPORT", 00:29:07.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:07.720 "adrfam": "ipv4", 
00:29:07.720 "trsvcid": "$NVMF_PORT", 00:29:07.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:07.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:07.720 "hdgst": ${hdgst:-false}, 00:29:07.720 "ddgst": ${ddgst:-false} 00:29:07.720 }, 00:29:07.720 "method": "bdev_nvme_attach_controller" 00:29:07.720 } 00:29:07.720 EOF 00:29:07.720 )") 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local sanitizers 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # shift 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local asan_lib= 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libasan 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:07.720 "params": { 00:29:07.720 "name": "Nvme0", 00:29:07.720 "trtype": "tcp", 00:29:07.720 "traddr": "10.0.0.2", 00:29:07.720 "adrfam": "ipv4", 00:29:07.720 "trsvcid": "4420", 00:29:07.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:07.720 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:07.720 "hdgst": true, 00:29:07.720 "ddgst": true 00:29:07.720 }, 00:29:07.720 "method": "bdev_nvme_attach_controller" 00:29:07.720 }' 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:29:07.720 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:07.721 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:07.721 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:07.721 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:07.721 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:07.721 21:54:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:07.979 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:07.979 ... 
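
Target-side context for the digest run that starts below: the rpc_cmd calls traced earlier created a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, placed it in subsystem cnode0 and opened a TCP listener on 10.0.0.2:4420, while the initiator config printed just above attaches with "hdgst" and "ddgst" set to true so every NVMe/TCP PDU carries CRC32C header and data digests. Outside the harness, the same target setup could be issued directly with scripts/rpc.py against a running nvmf_tgt, roughly as follows (this assumes the TCP transport was already created earlier in the run, before this excerpt):

# Hand-driven equivalent of the create_subsystems 0 sequence traced above
# (rpc.py talks to nvmf_tgt's default /var/tmp/spdk.sock RPC socket).
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
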
00:29:07.979 fio-3.35 00:29:07.979 Starting 3 threads 00:29:07.979 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.186 00:29:20.186 filename0: (groupid=0, jobs=1): err= 0: pid=3238700: Wed Jul 24 21:54:26 2024 00:29:20.186 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(309MiB/10009msec) 00:29:20.186 slat (nsec): min=4225, max=18777, avg=11195.95, stdev=2139.47 00:29:20.186 clat (usec): min=5304, max=94619, avg=12142.01, stdev=9557.55 00:29:20.186 lat (usec): min=5311, max=94629, avg=12153.21, stdev=9557.71 00:29:20.186 clat percentiles (usec): 00:29:20.186 | 1.00th=[ 5932], 5.00th=[ 6456], 10.00th=[ 7373], 20.00th=[ 8225], 00:29:20.186 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[10814], 00:29:20.186 | 70.00th=[11338], 80.00th=[11994], 90.00th=[13173], 95.00th=[17957], 00:29:20.186 | 99.00th=[55313], 99.50th=[55837], 99.90th=[58459], 99.95th=[94897], 00:29:20.186 | 99.99th=[94897] 00:29:20.186 bw ( KiB/s): min=22272, max=38656, per=35.85%, avg=31590.40, stdev=4254.94, samples=20 00:29:20.186 iops : min= 174, max= 302, avg=246.80, stdev=33.24, samples=20 00:29:20.186 lat (msec) : 10=43.64%, 20=51.46%, 50=0.85%, 100=4.05% 00:29:20.186 cpu : usr=95.74%, sys=3.91%, ctx=14, majf=0, minf=62 00:29:20.186 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.186 issued rwts: total=2470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:20.186 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:20.186 filename0: (groupid=0, jobs=1): err= 0: pid=3238701: Wed Jul 24 21:54:26 2024 00:29:20.186 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(267MiB/10049msec) 00:29:20.186 slat (usec): min=6, max=310, avg=11.59, stdev= 6.82 00:29:20.186 clat (usec): min=5429, max=61094, avg=14072.20, stdev=11175.06 00:29:20.186 lat (usec): min=5437, max=61119, avg=14083.79, stdev=11175.32 00:29:20.186 clat percentiles (usec): 00:29:20.186 | 1.00th=[ 5932], 5.00th=[ 6980], 10.00th=[ 7898], 20.00th=[ 9110], 00:29:20.186 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:29:20.186 | 70.00th=[12387], 80.00th=[13173], 90.00th=[15664], 95.00th=[52167], 00:29:20.186 | 99.00th=[56886], 99.50th=[57410], 99.90th=[60031], 99.95th=[60556], 00:29:20.186 | 99.99th=[61080] 00:29:20.186 bw ( KiB/s): min=23040, max=33280, per=31.02%, avg=27330.35, stdev=2687.99, samples=20 00:29:20.186 iops : min= 180, max= 260, avg=213.50, stdev=21.03, samples=20 00:29:20.186 lat (msec) : 10=27.52%, 20=65.37%, 50=0.42%, 100=6.69% 00:29:20.186 cpu : usr=95.47%, sys=4.13%, ctx=18, majf=0, minf=159 00:29:20.186 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.186 issued rwts: total=2137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:20.186 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:20.186 filename0: (groupid=0, jobs=1): err= 0: pid=3238702: Wed Jul 24 21:54:26 2024 00:29:20.186 read: IOPS=230, BW=28.9MiB/s (30.3MB/s)(289MiB/10005msec) 00:29:20.186 slat (nsec): min=6442, max=32443, avg=11340.30, stdev=2229.72 00:29:20.186 clat (msec): min=5, max=136, avg=12.97, stdev=10.63 00:29:20.186 lat (msec): min=5, max=136, avg=12.98, stdev=10.63 00:29:20.186 clat percentiles (msec): 00:29:20.186 | 1.00th=[ 6], 
5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:29:20.186 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:29:20.186 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 51], 00:29:20.186 | 99.00th=[ 55], 99.50th=[ 56], 99.90th=[ 93], 99.95th=[ 97], 00:29:20.186 | 99.99th=[ 136] 00:29:20.186 bw ( KiB/s): min=21760, max=37376, per=33.24%, avg=29291.79, stdev=4482.49, samples=19 00:29:20.186 iops : min= 170, max= 292, avg=228.84, stdev=35.02, samples=19 00:29:20.186 lat (msec) : 10=35.70%, 20=58.37%, 50=0.65%, 100=5.24%, 250=0.04% 00:29:20.186 cpu : usr=95.39%, sys=4.18%, ctx=16, majf=0, minf=120 00:29:20.186 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.186 issued rwts: total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:20.186 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:20.186 00:29:20.186 Run status group 0 (all jobs): 00:29:20.186 READ: bw=86.1MiB/s (90.2MB/s), 26.6MiB/s-30.8MiB/s (27.9MB/s-32.3MB/s), io=865MiB (907MB), run=10005-10049msec 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.186 00:29:20.186 real 0m11.163s 00:29:20.186 user 0m35.883s 00:29:20.186 sys 0m1.476s 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:20.186 21:54:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:20.186 ************************************ 00:29:20.186 END TEST fio_dif_digest 00:29:20.186 ************************************ 00:29:20.186 21:54:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:20.186 21:54:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:20.186 21:54:26 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:20.186 21:54:26 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:20.186 21:54:26 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:20.186 21:54:26 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:20.186 21:54:26 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:20.186 21:54:26 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:20.186 rmmod nvme_tcp 00:29:20.186 rmmod nvme_fabrics 00:29:20.186 rmmod nvme_keyring 
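
Cleanup phase: with both dif suites finished (the END TEST markers above), nvmftestfini stops the long-running nvmf_tgt (pid 3229585, recorded at the start of the run) and unloads the kernel NVMe/TCP initiator modules, producing the rmmod lines just above; the setup.sh reset traced next returns the NVMe and ioatdma devices from vfio-pci to their default drivers. A condensed sketch of that sequence, paraphrasing rather than copying killprocess()/nvmftestfini:

# Condensed teardown, roughly what the helpers above and below perform.
pid=3229585                                    # nvmf_tgt pid saved when the target started
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"
    while kill -0 "$pid" 2>/dev/null; do sleep 1; done   # wait for the target to exit
fi
modprobe -v -r nvme-tcp                        # -v shows each rmmod, including dependent modules
modprobe -v -r nvme-fabrics
./scripts/setup.sh reset                       # rebind devices for the next test suite
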
00:29:20.186 21:54:26 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:20.186 21:54:26 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:20.186 21:54:26 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:20.186 21:54:26 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3229585 ']' 00:29:20.186 21:54:26 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3229585 00:29:20.186 21:54:26 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3229585 ']' 00:29:20.186 21:54:26 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3229585 00:29:20.186 21:54:26 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:29:20.186 21:54:26 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:20.186 21:54:26 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3229585 00:29:20.186 21:54:26 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:20.186 21:54:26 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:20.186 21:54:26 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3229585' 00:29:20.186 killing process with pid 3229585 00:29:20.186 21:54:26 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3229585 00:29:20.186 21:54:26 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3229585 00:29:20.186 21:54:27 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:20.186 21:54:27 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:21.620 Waiting for block devices as requested 00:29:21.620 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:21.620 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:21.620 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:21.620 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:21.620 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:21.620 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:21.879 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:21.879 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:21.879 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:21.879 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:22.138 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:22.138 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:22.138 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:22.398 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:22.398 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:22.398 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:22.398 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:22.657 21:54:30 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:22.657 21:54:30 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:22.657 21:54:30 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:22.657 21:54:30 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:22.657 21:54:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.657 21:54:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:22.657 21:54:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.559 21:54:32 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:24.559 00:29:24.559 real 1m12.902s 00:29:24.559 user 7m6.052s 00:29:24.559 sys 0m18.977s 00:29:24.559 21:54:32 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:24.559 21:54:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:24.559 
************************************ 00:29:24.559 END TEST nvmf_dif 00:29:24.559 ************************************ 00:29:24.559 21:54:32 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:24.559 21:54:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:24.559 21:54:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.559 21:54:32 -- common/autotest_common.sh@10 -- # set +x 00:29:24.559 ************************************ 00:29:24.559 START TEST nvmf_abort_qd_sizes 00:29:24.559 ************************************ 00:29:24.559 21:54:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:24.818 * Looking for test storage... 00:29:24.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.818 21:54:32 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:24.818 21:54:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:30.087 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:30.087 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:30.087 Found net devices under 0000:86:00.0: cvl_0_0 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:30.087 Found net devices under 0000:86:00.1: cvl_0_1 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
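The device scan above has matched the two E810 ports (device ID 0x159b, bound to the ice driver) and resolved their network interfaces, cvl_0_0 and cvl_0_1, through /sys/bus/pci/devices/<bdf>/net. For anyone reproducing this outside the harness, a minimal stand-alone scan along the same lines could look like the sketch below; it illustrates the sysfs lookup only, it is not the SPDK helper itself, and it covers just the Intel IDs mentioned in the trace.

#!/usr/bin/env bash
# List NICs these nvmf tests care about (E810 0x1592/0x159b, X722 0x37d2) and the
# net interface(s) sitting on top of each matching PCI function.
for bdf in $(lspci -Dn | awk '$3 ~ /^8086:(1592|159b|37d2)$/ {print $1}'); do
  for netdir in /sys/bus/pci/devices/"$bdf"/net/*; do
    [[ -e $netdir ]] && echo "$bdf -> ${netdir##*/}"
  done
done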
00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.087 21:54:37 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.087 21:54:38 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.087 21:54:38 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:30.087 21:54:38 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.087 21:54:38 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.087 21:54:38 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.087 21:54:38 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:30.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:29:30.087 00:29:30.087 --- 10.0.0.2 ping statistics --- 00:29:30.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.087 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:29:30.087 21:54:38 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.477 ms 00:29:30.087 00:29:30.087 --- 10.0.0.1 ping statistics --- 00:29:30.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.087 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:29:30.087 21:54:38 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.087 21:54:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:30.087 21:54:38 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:30.087 21:54:38 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:32.721 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:32.721 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:33.659 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3246376 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3246376 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3246376 ']' 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
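To recap the topology nvmf_tcp_init has just built: cvl_0_0 was moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24 (target side), cvl_0_1 stayed in the root namespace as 10.0.0.1/24 (initiator side), an iptables rule opened TCP port 4420, and both directions were verified with ping before nvmf_tgt is launched inside the namespace. The sketch below approximates the same wiring with a veth pair instead of two looped-back physical E810 ports; interface names and the nvmf_tgt path are placeholders, not the harness's own values.

#!/usr/bin/env bash
set -e
# Target side lives in its own namespace, initiator side stays in the root namespace.
ip netns add tgt_ns
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns tgt_ns
ip addr add 10.0.0.1/24 dev veth_init
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec tgt_ns ip link set veth_tgt up
ip netns exec tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # root namespace -> target namespace
ip netns exec tgt_ns ping -c 1 10.0.0.1   # target namespace -> root namespace
# Launch the NVMe-oF target inside the namespace (binary path is a placeholder).
ip netns exec tgt_ns ./build/bin/nvmf_tgt -m 0xf &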
00:29:33.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.918 21:54:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:33.918 [2024-07-24 21:54:41.887065] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:29:33.918 [2024-07-24 21:54:41.887111] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.918 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.918 [2024-07-24 21:54:41.944995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.918 [2024-07-24 21:54:42.033095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.918 [2024-07-24 21:54:42.033134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.918 [2024-07-24 21:54:42.033142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.918 [2024-07-24 21:54:42.033148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.918 [2024-07-24 21:54:42.033153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.918 [2024-07-24 21:54:42.033200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.918 [2024-07-24 21:54:42.033222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.918 [2024-07-24 21:54:42.033241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:33.918 [2024-07-24 21:54:42.033242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:29:34.854 21:54:42 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:34.854 21:54:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:34.854 ************************************ 00:29:34.854 START TEST spdk_target_abort 00:29:34.854 ************************************ 00:29:34.854 21:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:29:34.854 21:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:34.854 21:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:29:34.854 21:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.854 21:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.148 spdk_targetn1 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.148 [2024-07-24 21:54:45.611660] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.148 [2024-07-24 21:54:45.652571] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:38.148 21:54:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:38.148 EAL: No free 2048 kB hugepages 
reported on node 1 00:29:40.681 Initializing NVMe Controllers 00:29:40.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:40.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:40.681 Initialization complete. Launching workers. 00:29:40.681 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5504, failed: 0 00:29:40.681 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1888, failed to submit 3616 00:29:40.681 success 898, unsuccess 990, failed 0 00:29:40.681 21:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:40.681 21:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:40.940 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.224 Initializing NVMe Controllers 00:29:44.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:44.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:44.224 Initialization complete. Launching workers. 00:29:44.224 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8664, failed: 0 00:29:44.224 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 7445 00:29:44.224 success 318, unsuccess 901, failed 0 00:29:44.224 21:54:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:44.224 21:54:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:44.224 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.509 Initializing NVMe Controllers 00:29:47.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:47.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:47.509 Initialization complete. Launching workers. 
00:29:47.509 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35545, failed: 0 00:29:47.509 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2795, failed to submit 32750 00:29:47.509 success 640, unsuccess 2155, failed 0 00:29:47.509 21:54:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:47.509 21:54:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.509 21:54:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.509 21:54:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.509 21:54:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:47.509 21:54:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.509 21:54:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3246376 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3246376 ']' 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3246376 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3246376 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3246376' 00:29:48.884 killing process with pid 3246376 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3246376 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3246376 00:29:48.884 00:29:48.884 real 0m14.094s 00:29:48.884 user 0m56.211s 00:29:48.884 sys 0m2.182s 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.884 ************************************ 00:29:48.884 END TEST spdk_target_abort 00:29:48.884 ************************************ 00:29:48.884 21:54:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:48.884 21:54:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:48.884 21:54:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:48.884 21:54:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:48.884 ************************************ 00:29:48.884 START TEST kernel_target_abort 00:29:48.884 
************************************ 00:29:48.884 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:48.885 21:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:52.178 Waiting for block devices as requested 00:29:52.178 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:52.178 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:52.178 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:52.178 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:52.178 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:52.178 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:52.178 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:52.178 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:52.178 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:52.178 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:52.437 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:52.437 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:52.437 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:52.696 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:52.696 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:52.696 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:52.696 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:52.954 No valid GPT data, bailing 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:52.954 21:55:00 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:52.954 00:29:52.954 Discovery Log Number of Records 2, Generation counter 2 00:29:52.954 =====Discovery Log Entry 0====== 00:29:52.954 trtype: tcp 00:29:52.954 adrfam: ipv4 00:29:52.954 subtype: current discovery subsystem 00:29:52.954 treq: not specified, sq flow control disable supported 00:29:52.954 portid: 1 00:29:52.954 trsvcid: 4420 00:29:52.954 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:52.954 traddr: 10.0.0.1 00:29:52.954 eflags: none 00:29:52.954 sectype: none 00:29:52.954 =====Discovery Log Entry 1====== 00:29:52.954 trtype: tcp 00:29:52.954 adrfam: ipv4 00:29:52.954 subtype: nvme subsystem 00:29:52.954 treq: not specified, sq flow control disable supported 00:29:52.954 portid: 1 00:29:52.954 trsvcid: 4420 00:29:52.954 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:52.954 traddr: 10.0.0.1 00:29:52.954 eflags: none 00:29:52.954 sectype: none 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:52.954 21:55:00 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:52.954 21:55:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:52.954 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.235 Initializing NVMe Controllers 00:29:56.235 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:56.235 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:56.235 Initialization complete. Launching workers. 00:29:56.235 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30424, failed: 0 00:29:56.235 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30424, failed to submit 0 00:29:56.235 success 0, unsuccess 30424, failed 0 00:29:56.235 21:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:56.235 21:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:56.235 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.519 Initializing NVMe Controllers 00:29:59.519 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:59.519 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:59.519 Initialization complete. Launching workers. 
00:29:59.519 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63200, failed: 0 00:29:59.519 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15970, failed to submit 47230 00:29:59.519 success 0, unsuccess 15970, failed 0 00:29:59.519 21:55:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:59.519 21:55:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:59.519 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.049 Initializing NVMe Controllers 00:30:02.049 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:02.049 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:02.049 Initialization complete. Launching workers. 00:30:02.049 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62175, failed: 0 00:30:02.049 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15522, failed to submit 46653 00:30:02.049 success 0, unsuccess 15522, failed 0 00:30:02.049 21:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:02.049 21:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:02.049 21:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:02.049 21:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:02.049 21:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:02.049 21:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:02.307 21:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:02.307 21:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:02.307 21:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:02.307 21:55:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:04.840 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:30:04.840 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:04.840 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:05.412 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:05.412 00:30:05.412 real 0m16.508s 00:30:05.412 user 0m4.236s 00:30:05.412 sys 0m5.294s 00:30:05.412 21:55:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:05.412 21:55:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:05.412 ************************************ 00:30:05.412 END TEST kernel_target_abort 00:30:05.412 ************************************ 00:30:05.412 21:55:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:05.412 21:55:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:05.412 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:05.412 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:05.412 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:05.412 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:05.412 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:05.412 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:05.412 rmmod nvme_tcp 00:30:05.412 rmmod nvme_fabrics 00:30:05.412 rmmod nvme_keyring 00:30:05.412 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:05.671 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:05.671 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:05.671 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3246376 ']' 00:30:05.671 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3246376 00:30:05.671 21:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3246376 ']' 00:30:05.671 21:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3246376 00:30:05.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3246376) - No such process 00:30:05.671 21:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3246376 is not found' 00:30:05.671 Process with pid 3246376 is not found 00:30:05.671 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:05.671 21:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:08.206 Waiting for block devices as requested 00:30:08.206 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:08.206 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:08.206 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:08.206 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:08.206 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:08.465 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:08.465 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:08.465 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:08.465 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:08.723 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:08.723 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:08.723 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:08.982 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:08.982 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:08.982 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:08.982 0000:80:04.1 
(8086 2021): vfio-pci -> ioatdma 00:30:09.242 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:09.242 21:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:09.242 21:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:09.242 21:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:09.242 21:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:09.242 21:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.242 21:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:09.242 21:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.208 21:55:19 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:11.208 00:30:11.208 real 0m46.591s 00:30:11.208 user 1m4.391s 00:30:11.208 sys 0m15.403s 00:30:11.208 21:55:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:11.208 21:55:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:11.208 ************************************ 00:30:11.208 END TEST nvmf_abort_qd_sizes 00:30:11.208 ************************************ 00:30:11.208 21:55:19 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:11.208 21:55:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:11.208 21:55:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:11.208 21:55:19 -- common/autotest_common.sh@10 -- # set +x 00:30:11.467 ************************************ 00:30:11.467 START TEST keyring_file 00:30:11.467 ************************************ 00:30:11.467 21:55:19 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:11.467 * Looking for test storage... 
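For context on the ioatdma -> vfio-pci and vfio-pci -> nvme / vfio-pci -> ioatdma transitions logged above: setup.sh reset rebinds each PCI device back to its kernel driver before the next test group starts. The sketch below is a generic sysfs rebind for one device, illustrative only and not the actual setup.sh code; the BDF and driver name are taken from one device seen in this log.

    # Illustrative only: rebind one PCI device (BDF from the log above) to the
    # nvme driver via sysfs; setup.sh reset does the equivalent for every device.
    bdf=0000:5e:00.0
    new_driver=nvme
    # Detach from whichever driver currently owns the device, if any.
    if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    fi
    # Let only the intended driver claim this device, then ask the bus to probe it.
    echo "$new_driver" > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
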
00:30:11.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.467 21:55:19 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.467 21:55:19 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.467 21:55:19 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.467 21:55:19 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.467 21:55:19 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.467 21:55:19 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.467 21:55:19 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:11.467 21:55:19 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oLYA6YCEFs 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:11.467 21:55:19 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oLYA6YCEFs 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oLYA6YCEFs 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.oLYA6YCEFs 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UzcS5gKT1b 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:11.467 21:55:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UzcS5gKT1b 00:30:11.467 21:55:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UzcS5gKT1b 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.UzcS5gKT1b 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@30 -- # tgtpid=3255028 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3255028 00:30:11.467 21:55:19 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:11.467 21:55:19 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3255028 ']' 00:30:11.467 21:55:19 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.467 21:55:19 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:11.467 21:55:19 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.467 21:55:19 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:11.467 21:55:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:11.727 [2024-07-24 21:55:19.608020] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
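The prep_key calls traced above boil down to: create a temp file, write the NVMeTLSkey-1 interchange-formatted key into it, and restrict it to owner-only permissions. A condensed sketch follows; it assumes nvmf/common.sh is sourced so format_interchange_psk is available, and it does not reproduce that function's python-based encoding.

    # Condensed sketch of keyring/common.sh prep_key as seen in this trace.
    prep_key_sketch() {
        local key=$1 digest=$2 path
        path=$(mktemp)                                     # e.g. /tmp/tmp.oLYA6YCEFs
        format_interchange_psk "$key" "$digest" > "$path"  # writes the NVMeTLSkey-1 string
        chmod 0600 "$path"                                 # looser modes are rejected, as the 0660 case later in this run shows
        echo "$path"
    }
    # prep_key_sketch 00112233445566778899aabbccddeeff 0
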
00:30:11.727 [2024-07-24 21:55:19.608078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255028 ] 00:30:11.727 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.727 [2024-07-24 21:55:19.662830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.727 [2024-07-24 21:55:19.737368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.294 21:55:20 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:12.294 21:55:20 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:12.294 21:55:20 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:12.294 21:55:20 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.294 21:55:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:12.552 [2024-07-24 21:55:20.412396] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.552 null0 00:30:12.552 [2024-07-24 21:55:20.444446] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:12.552 [2024-07-24 21:55:20.444673] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:12.552 [2024-07-24 21:55:20.452456] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:12.552 21:55:20 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.552 21:55:20 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:12.552 21:55:20 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:12.552 21:55:20 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:12.552 21:55:20 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:12.552 21:55:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:12.552 21:55:20 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:12.552 21:55:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:12.552 21:55:20 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:12.553 [2024-07-24 21:55:20.464491] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:12.553 request: 00:30:12.553 { 00:30:12.553 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:12.553 "secure_channel": false, 00:30:12.553 "listen_address": { 00:30:12.553 "trtype": "tcp", 00:30:12.553 "traddr": "127.0.0.1", 00:30:12.553 "trsvcid": "4420" 00:30:12.553 }, 00:30:12.553 "method": "nvmf_subsystem_add_listener", 00:30:12.553 "req_id": 1 00:30:12.553 } 00:30:12.553 Got JSON-RPC error response 00:30:12.553 response: 00:30:12.553 { 00:30:12.553 "code": -32602, 00:30:12.553 "message": "Invalid parameters" 00:30:12.553 } 00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 
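The negative test above exercises nvmf_subsystem_add_listener against a listener that is already registered. Run by hand against the target's RPC socket, the equivalent command (same arguments as the rpc_cmd in the trace) looks like the line below and is expected to come back with the -32602 Invalid parameters response shown above.

    # Duplicate listener add; expected to fail while 127.0.0.1:4420 already exists.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock \
        nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0
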
00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:12.553 21:55:20 keyring_file -- keyring/file.sh@46 -- # bperfpid=3255188 00:30:12.553 21:55:20 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3255188 /var/tmp/bperf.sock 00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3255188 ']' 00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:12.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:12.553 21:55:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:12.553 21:55:20 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:12.553 [2024-07-24 21:55:20.512028] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 00:30:12.553 [2024-07-24 21:55:20.512074] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255188 ] 00:30:12.553 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.553 [2024-07-24 21:55:20.565428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.553 [2024-07-24 21:55:20.644067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.487 21:55:21 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:13.487 21:55:21 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:13.487 21:55:21 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oLYA6YCEFs 00:30:13.487 21:55:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oLYA6YCEFs 00:30:13.487 21:55:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UzcS5gKT1b 00:30:13.487 21:55:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UzcS5gKT1b 00:30:13.745 21:55:21 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:13.745 21:55:21 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:13.745 21:55:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:13.745 21:55:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:13.745 21:55:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:13.745 21:55:21 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.oLYA6YCEFs == \/\t\m\p\/\t\m\p\.\o\L\Y\A\6\Y\C\E\F\s ]] 00:30:13.745 21:55:21 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:30:13.745 21:55:21 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:13.745 21:55:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:13.745 21:55:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:13.745 21:55:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:14.003 21:55:21 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.UzcS5gKT1b == \/\t\m\p\/\t\m\p\.\U\z\c\S\5\g\K\T\1\b ]] 00:30:14.003 21:55:21 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:14.003 21:55:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:14.003 21:55:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.003 21:55:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.003 21:55:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.003 21:55:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:14.262 21:55:22 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:14.262 21:55:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:14.262 21:55:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:14.262 21:55:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.262 21:55:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.262 21:55:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:14.262 21:55:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.262 21:55:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:14.262 21:55:22 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:14.262 21:55:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:14.520 [2024-07-24 21:55:22.501046] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:14.520 nvme0n1 00:30:14.520 21:55:22 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:14.520 21:55:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:14.520 21:55:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.520 21:55:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.520 21:55:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:14.520 21:55:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.779 21:55:22 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:14.779 21:55:22 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:14.779 21:55:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:14.779 21:55:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.779 21:55:22 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.779 21:55:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.779 21:55:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:15.038 21:55:22 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:15.038 21:55:22 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:15.038 Running I/O for 1 seconds... 00:30:15.973 00:30:15.973 Latency(us) 00:30:15.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.973 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:15.973 nvme0n1 : 1.02 3720.21 14.53 0.00 0.00 34066.63 9516.97 57443.73 00:30:15.973 =================================================================================================================== 00:30:15.973 Total : 3720.21 14.53 0.00 0.00 34066.63 9516.97 57443.73 00:30:15.973 0 00:30:15.973 21:55:24 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:15.973 21:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:16.231 21:55:24 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:16.231 21:55:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:16.231 21:55:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:16.231 21:55:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:16.231 21:55:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.231 21:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.489 21:55:24 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:16.489 21:55:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:16.489 21:55:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:16.489 21:55:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:16.489 21:55:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.489 21:55:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:16.489 21:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.748 21:55:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:16.748 21:55:24 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:16.748 21:55:24 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:16.748 21:55:24 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:16.748 21:55:24 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:16.748 21:55:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:16.748 21:55:24 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:16.748 21:55:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:16.748 21:55:24 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:16.748 21:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:16.748 [2024-07-24 21:55:24.777021] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:16.748 [2024-07-24 21:55:24.777448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf820 (107): Transport endpoint is not connected 00:30:16.748 [2024-07-24 21:55:24.778444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf820 (9): Bad file descriptor 00:30:16.748 [2024-07-24 21:55:24.779443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:16.748 [2024-07-24 21:55:24.779452] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:16.748 [2024-07-24 21:55:24.779460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:16.748 request: 00:30:16.748 { 00:30:16.748 "name": "nvme0", 00:30:16.748 "trtype": "tcp", 00:30:16.748 "traddr": "127.0.0.1", 00:30:16.748 "adrfam": "ipv4", 00:30:16.748 "trsvcid": "4420", 00:30:16.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:16.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:16.748 "prchk_reftag": false, 00:30:16.748 "prchk_guard": false, 00:30:16.748 "hdgst": false, 00:30:16.748 "ddgst": false, 00:30:16.748 "psk": "key1", 00:30:16.748 "method": "bdev_nvme_attach_controller", 00:30:16.748 "req_id": 1 00:30:16.748 } 00:30:16.748 Got JSON-RPC error response 00:30:16.748 response: 00:30:16.748 { 00:30:16.748 "code": -5, 00:30:16.748 "message": "Input/output error" 00:30:16.748 } 00:30:16.748 21:55:24 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:16.748 21:55:24 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:16.748 21:55:24 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:16.748 21:55:24 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:16.748 21:55:24 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:16.748 21:55:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:16.748 21:55:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:16.748 21:55:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.748 21:55:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:16.748 21:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:17.006 21:55:24 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:17.006 21:55:24 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:17.006 21:55:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:17.006 21:55:24 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:17.006 21:55:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:17.006 21:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:17.006 21:55:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:17.263 21:55:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:17.263 21:55:25 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:17.263 21:55:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:17.263 21:55:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:17.263 21:55:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:17.522 21:55:25 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:17.522 21:55:25 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:17.522 21:55:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:17.780 21:55:25 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:17.780 21:55:25 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.oLYA6YCEFs 00:30:17.780 21:55:25 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.oLYA6YCEFs 00:30:17.780 21:55:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:17.780 21:55:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.oLYA6YCEFs 00:30:17.780 21:55:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:17.780 21:55:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:17.780 21:55:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:17.780 21:55:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:17.780 21:55:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oLYA6YCEFs 00:30:17.780 21:55:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oLYA6YCEFs 00:30:17.780 [2024-07-24 21:55:25.841322] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oLYA6YCEFs': 0100660 00:30:17.780 [2024-07-24 21:55:25.841344] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:17.780 request: 00:30:17.780 { 00:30:17.780 "name": "key0", 00:30:17.780 "path": "/tmp/tmp.oLYA6YCEFs", 00:30:17.780 "method": "keyring_file_add_key", 00:30:17.780 "req_id": 1 00:30:17.780 } 00:30:17.780 Got JSON-RPC error response 00:30:17.780 response: 00:30:17.780 { 00:30:17.780 "code": -1, 00:30:17.780 "message": "Operation not permitted" 00:30:17.780 } 00:30:17.780 21:55:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:17.780 21:55:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:17.780 21:55:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:17.780 21:55:25 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:17.780 21:55:25 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.oLYA6YCEFs 00:30:17.780 21:55:25 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oLYA6YCEFs 00:30:17.780 21:55:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oLYA6YCEFs 00:30:18.038 21:55:26 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.oLYA6YCEFs 00:30:18.038 21:55:26 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:18.038 21:55:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:18.038 21:55:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:18.038 21:55:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.038 21:55:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.038 21:55:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:18.295 21:55:26 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:18.295 21:55:26 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.295 21:55:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:18.295 21:55:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.295 21:55:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:18.295 21:55:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:18.295 21:55:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:18.295 21:55:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:18.295 21:55:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.295 21:55:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.295 [2024-07-24 21:55:26.370731] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.oLYA6YCEFs': No such file or directory 00:30:18.295 [2024-07-24 21:55:26.370751] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:18.295 [2024-07-24 21:55:26.370770] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:18.295 [2024-07-24 21:55:26.370775] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:18.295 [2024-07-24 21:55:26.370782] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:18.295 request: 00:30:18.295 { 00:30:18.295 "name": "nvme0", 00:30:18.295 "trtype": "tcp", 00:30:18.295 "traddr": "127.0.0.1", 00:30:18.295 "adrfam": "ipv4", 00:30:18.295 
"trsvcid": "4420", 00:30:18.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:18.295 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:18.295 "prchk_reftag": false, 00:30:18.295 "prchk_guard": false, 00:30:18.295 "hdgst": false, 00:30:18.295 "ddgst": false, 00:30:18.295 "psk": "key0", 00:30:18.295 "method": "bdev_nvme_attach_controller", 00:30:18.295 "req_id": 1 00:30:18.295 } 00:30:18.295 Got JSON-RPC error response 00:30:18.295 response: 00:30:18.295 { 00:30:18.295 "code": -19, 00:30:18.295 "message": "No such device" 00:30:18.296 } 00:30:18.296 21:55:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:18.296 21:55:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:18.296 21:55:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:18.296 21:55:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:18.296 21:55:26 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:18.296 21:55:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:18.552 21:55:26 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:18.552 21:55:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:18.552 21:55:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:18.552 21:55:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:18.552 21:55:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:18.552 21:55:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:18.552 21:55:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8vCgFUz2pt 00:30:18.552 21:55:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:18.552 21:55:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:18.552 21:55:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:18.552 21:55:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:18.552 21:55:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:18.552 21:55:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:18.552 21:55:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:18.552 21:55:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8vCgFUz2pt 00:30:18.552 21:55:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8vCgFUz2pt 00:30:18.552 21:55:26 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.8vCgFUz2pt 00:30:18.552 21:55:26 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8vCgFUz2pt 00:30:18.552 21:55:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8vCgFUz2pt 00:30:18.809 21:55:26 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.809 21:55:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:19.068 nvme0n1 00:30:19.068 
21:55:27 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:19.068 21:55:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:19.068 21:55:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:19.068 21:55:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.068 21:55:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:19.068 21:55:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.326 21:55:27 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:19.326 21:55:27 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:19.326 21:55:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:19.326 21:55:27 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:19.326 21:55:27 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:19.326 21:55:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.326 21:55:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:19.326 21:55:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.584 21:55:27 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:19.584 21:55:27 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:19.584 21:55:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:19.584 21:55:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:19.584 21:55:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.584 21:55:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:19.584 21:55:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.842 21:55:27 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:19.842 21:55:27 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:19.842 21:55:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:19.842 21:55:27 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:19.842 21:55:27 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:19.842 21:55:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:20.099 21:55:28 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:20.100 21:55:28 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8vCgFUz2pt 00:30:20.100 21:55:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8vCgFUz2pt 00:30:20.358 21:55:28 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UzcS5gKT1b 00:30:20.358 21:55:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UzcS5gKT1b 00:30:20.358 21:55:28 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:20.358 21:55:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:20.617 nvme0n1 00:30:20.617 21:55:28 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:20.617 21:55:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:20.876 21:55:28 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:20.876 "subsystems": [ 00:30:20.876 { 00:30:20.876 "subsystem": "keyring", 00:30:20.876 "config": [ 00:30:20.876 { 00:30:20.876 "method": "keyring_file_add_key", 00:30:20.876 "params": { 00:30:20.876 "name": "key0", 00:30:20.876 "path": "/tmp/tmp.8vCgFUz2pt" 00:30:20.876 } 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "method": "keyring_file_add_key", 00:30:20.876 "params": { 00:30:20.876 "name": "key1", 00:30:20.876 "path": "/tmp/tmp.UzcS5gKT1b" 00:30:20.876 } 00:30:20.876 } 00:30:20.876 ] 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "subsystem": "iobuf", 00:30:20.876 "config": [ 00:30:20.876 { 00:30:20.876 "method": "iobuf_set_options", 00:30:20.876 "params": { 00:30:20.876 "small_pool_count": 8192, 00:30:20.876 "large_pool_count": 1024, 00:30:20.876 "small_bufsize": 8192, 00:30:20.876 "large_bufsize": 135168 00:30:20.876 } 00:30:20.876 } 00:30:20.876 ] 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "subsystem": "sock", 00:30:20.876 "config": [ 00:30:20.876 { 00:30:20.876 "method": "sock_set_default_impl", 00:30:20.876 "params": { 00:30:20.876 "impl_name": "posix" 00:30:20.876 } 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "method": "sock_impl_set_options", 00:30:20.876 "params": { 00:30:20.876 "impl_name": "ssl", 00:30:20.876 "recv_buf_size": 4096, 00:30:20.876 "send_buf_size": 4096, 00:30:20.876 "enable_recv_pipe": true, 00:30:20.876 "enable_quickack": false, 00:30:20.876 "enable_placement_id": 0, 00:30:20.876 "enable_zerocopy_send_server": true, 00:30:20.876 "enable_zerocopy_send_client": false, 00:30:20.876 "zerocopy_threshold": 0, 00:30:20.876 "tls_version": 0, 00:30:20.876 "enable_ktls": false 00:30:20.876 } 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "method": "sock_impl_set_options", 00:30:20.876 "params": { 00:30:20.876 "impl_name": "posix", 00:30:20.876 "recv_buf_size": 2097152, 00:30:20.876 "send_buf_size": 2097152, 00:30:20.876 "enable_recv_pipe": true, 00:30:20.876 "enable_quickack": false, 00:30:20.876 "enable_placement_id": 0, 00:30:20.876 "enable_zerocopy_send_server": true, 00:30:20.876 "enable_zerocopy_send_client": false, 00:30:20.876 "zerocopy_threshold": 0, 00:30:20.876 "tls_version": 0, 00:30:20.876 "enable_ktls": false 00:30:20.876 } 00:30:20.876 } 00:30:20.876 ] 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "subsystem": "vmd", 00:30:20.876 "config": [] 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "subsystem": "accel", 00:30:20.876 "config": [ 00:30:20.876 { 00:30:20.876 "method": "accel_set_options", 00:30:20.876 "params": { 00:30:20.876 "small_cache_size": 128, 00:30:20.876 "large_cache_size": 16, 00:30:20.876 "task_count": 2048, 00:30:20.876 "sequence_count": 2048, 00:30:20.876 "buf_count": 2048 00:30:20.876 } 00:30:20.876 } 00:30:20.876 ] 00:30:20.876 
}, 00:30:20.876 { 00:30:20.876 "subsystem": "bdev", 00:30:20.876 "config": [ 00:30:20.876 { 00:30:20.876 "method": "bdev_set_options", 00:30:20.876 "params": { 00:30:20.876 "bdev_io_pool_size": 65535, 00:30:20.876 "bdev_io_cache_size": 256, 00:30:20.876 "bdev_auto_examine": true, 00:30:20.876 "iobuf_small_cache_size": 128, 00:30:20.876 "iobuf_large_cache_size": 16 00:30:20.876 } 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "method": "bdev_raid_set_options", 00:30:20.876 "params": { 00:30:20.876 "process_window_size_kb": 1024, 00:30:20.876 "process_max_bandwidth_mb_sec": 0 00:30:20.876 } 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "method": "bdev_iscsi_set_options", 00:30:20.876 "params": { 00:30:20.876 "timeout_sec": 30 00:30:20.876 } 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "method": "bdev_nvme_set_options", 00:30:20.876 "params": { 00:30:20.876 "action_on_timeout": "none", 00:30:20.876 "timeout_us": 0, 00:30:20.876 "timeout_admin_us": 0, 00:30:20.876 "keep_alive_timeout_ms": 10000, 00:30:20.876 "arbitration_burst": 0, 00:30:20.876 "low_priority_weight": 0, 00:30:20.876 "medium_priority_weight": 0, 00:30:20.876 "high_priority_weight": 0, 00:30:20.876 "nvme_adminq_poll_period_us": 10000, 00:30:20.876 "nvme_ioq_poll_period_us": 0, 00:30:20.876 "io_queue_requests": 512, 00:30:20.876 "delay_cmd_submit": true, 00:30:20.876 "transport_retry_count": 4, 00:30:20.876 "bdev_retry_count": 3, 00:30:20.876 "transport_ack_timeout": 0, 00:30:20.876 "ctrlr_loss_timeout_sec": 0, 00:30:20.876 "reconnect_delay_sec": 0, 00:30:20.876 "fast_io_fail_timeout_sec": 0, 00:30:20.876 "disable_auto_failback": false, 00:30:20.876 "generate_uuids": false, 00:30:20.876 "transport_tos": 0, 00:30:20.876 "nvme_error_stat": false, 00:30:20.876 "rdma_srq_size": 0, 00:30:20.876 "io_path_stat": false, 00:30:20.876 "allow_accel_sequence": false, 00:30:20.876 "rdma_max_cq_size": 0, 00:30:20.876 "rdma_cm_event_timeout_ms": 0, 00:30:20.876 "dhchap_digests": [ 00:30:20.876 "sha256", 00:30:20.876 "sha384", 00:30:20.876 "sha512" 00:30:20.876 ], 00:30:20.876 "dhchap_dhgroups": [ 00:30:20.876 "null", 00:30:20.876 "ffdhe2048", 00:30:20.876 "ffdhe3072", 00:30:20.876 "ffdhe4096", 00:30:20.876 "ffdhe6144", 00:30:20.876 "ffdhe8192" 00:30:20.876 ] 00:30:20.876 } 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "method": "bdev_nvme_attach_controller", 00:30:20.876 "params": { 00:30:20.876 "name": "nvme0", 00:30:20.876 "trtype": "TCP", 00:30:20.876 "adrfam": "IPv4", 00:30:20.876 "traddr": "127.0.0.1", 00:30:20.876 "trsvcid": "4420", 00:30:20.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:20.876 "prchk_reftag": false, 00:30:20.876 "prchk_guard": false, 00:30:20.876 "ctrlr_loss_timeout_sec": 0, 00:30:20.876 "reconnect_delay_sec": 0, 00:30:20.876 "fast_io_fail_timeout_sec": 0, 00:30:20.876 "psk": "key0", 00:30:20.876 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:20.876 "hdgst": false, 00:30:20.876 "ddgst": false 00:30:20.876 } 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "method": "bdev_nvme_set_hotplug", 00:30:20.876 "params": { 00:30:20.876 "period_us": 100000, 00:30:20.876 "enable": false 00:30:20.876 } 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "method": "bdev_wait_for_examine" 00:30:20.876 } 00:30:20.876 ] 00:30:20.876 }, 00:30:20.876 { 00:30:20.876 "subsystem": "nbd", 00:30:20.876 "config": [] 00:30:20.876 } 00:30:20.876 ] 00:30:20.876 }' 00:30:20.876 21:55:28 keyring_file -- keyring/file.sh@114 -- # killprocess 3255188 00:30:20.876 21:55:28 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3255188 ']' 00:30:20.876 21:55:28 
keyring_file -- common/autotest_common.sh@952 -- # kill -0 3255188 00:30:20.877 21:55:28 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:20.877 21:55:28 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:20.877 21:55:28 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3255188 00:30:20.877 21:55:28 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:20.877 21:55:28 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:20.877 21:55:28 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3255188' 00:30:20.877 killing process with pid 3255188 00:30:20.877 21:55:28 keyring_file -- common/autotest_common.sh@967 -- # kill 3255188 00:30:20.877 Received shutdown signal, test time was about 1.000000 seconds 00:30:20.877 00:30:20.877 Latency(us) 00:30:20.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.877 =================================================================================================================== 00:30:20.877 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:20.877 21:55:28 keyring_file -- common/autotest_common.sh@972 -- # wait 3255188 00:30:21.136 21:55:29 keyring_file -- keyring/file.sh@117 -- # bperfpid=3256710 00:30:21.136 21:55:29 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3256710 /var/tmp/bperf.sock 00:30:21.136 21:55:29 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3256710 ']' 00:30:21.136 21:55:29 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:21.136 21:55:29 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:21.136 21:55:29 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:21.136 21:55:29 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:21.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
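The second bdevperf instance launched above receives its configuration through -c /dev/fd/63, i.e. the JSON echoed next is handed over via bash process substitution instead of a file on disk. Roughly, with config standing in for the JSON printed below:

    # Same flags as keyring/file.sh@115 above; -z makes bdevperf wait until
    # bdevperf.py ... perform_tests is invoked over /var/tmp/bperf.sock.
    config='{ "subsystems": [ ... ] }'   # the JSON echoed below
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")
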
00:30:21.136 21:55:29 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:21.136 "subsystems": [ 00:30:21.136 { 00:30:21.136 "subsystem": "keyring", 00:30:21.136 "config": [ 00:30:21.136 { 00:30:21.136 "method": "keyring_file_add_key", 00:30:21.136 "params": { 00:30:21.136 "name": "key0", 00:30:21.136 "path": "/tmp/tmp.8vCgFUz2pt" 00:30:21.136 } 00:30:21.136 }, 00:30:21.136 { 00:30:21.136 "method": "keyring_file_add_key", 00:30:21.136 "params": { 00:30:21.136 "name": "key1", 00:30:21.136 "path": "/tmp/tmp.UzcS5gKT1b" 00:30:21.136 } 00:30:21.136 } 00:30:21.136 ] 00:30:21.136 }, 00:30:21.136 { 00:30:21.136 "subsystem": "iobuf", 00:30:21.136 "config": [ 00:30:21.136 { 00:30:21.136 "method": "iobuf_set_options", 00:30:21.136 "params": { 00:30:21.136 "small_pool_count": 8192, 00:30:21.136 "large_pool_count": 1024, 00:30:21.136 "small_bufsize": 8192, 00:30:21.136 "large_bufsize": 135168 00:30:21.136 } 00:30:21.136 } 00:30:21.136 ] 00:30:21.136 }, 00:30:21.136 { 00:30:21.136 "subsystem": "sock", 00:30:21.136 "config": [ 00:30:21.136 { 00:30:21.136 "method": "sock_set_default_impl", 00:30:21.136 "params": { 00:30:21.136 "impl_name": "posix" 00:30:21.136 } 00:30:21.136 }, 00:30:21.136 { 00:30:21.136 "method": "sock_impl_set_options", 00:30:21.136 "params": { 00:30:21.136 "impl_name": "ssl", 00:30:21.136 "recv_buf_size": 4096, 00:30:21.136 "send_buf_size": 4096, 00:30:21.136 "enable_recv_pipe": true, 00:30:21.136 "enable_quickack": false, 00:30:21.136 "enable_placement_id": 0, 00:30:21.136 "enable_zerocopy_send_server": true, 00:30:21.136 "enable_zerocopy_send_client": false, 00:30:21.136 "zerocopy_threshold": 0, 00:30:21.136 "tls_version": 0, 00:30:21.136 "enable_ktls": false 00:30:21.136 } 00:30:21.136 }, 00:30:21.136 { 00:30:21.136 "method": "sock_impl_set_options", 00:30:21.136 "params": { 00:30:21.136 "impl_name": "posix", 00:30:21.136 "recv_buf_size": 2097152, 00:30:21.136 "send_buf_size": 2097152, 00:30:21.136 "enable_recv_pipe": true, 00:30:21.136 "enable_quickack": false, 00:30:21.136 "enable_placement_id": 0, 00:30:21.136 "enable_zerocopy_send_server": true, 00:30:21.136 "enable_zerocopy_send_client": false, 00:30:21.136 "zerocopy_threshold": 0, 00:30:21.136 "tls_version": 0, 00:30:21.136 "enable_ktls": false 00:30:21.136 } 00:30:21.136 } 00:30:21.136 ] 00:30:21.136 }, 00:30:21.136 { 00:30:21.136 "subsystem": "vmd", 00:30:21.136 "config": [] 00:30:21.136 }, 00:30:21.136 { 00:30:21.136 "subsystem": "accel", 00:30:21.136 "config": [ 00:30:21.136 { 00:30:21.136 "method": "accel_set_options", 00:30:21.136 "params": { 00:30:21.136 "small_cache_size": 128, 00:30:21.136 "large_cache_size": 16, 00:30:21.136 "task_count": 2048, 00:30:21.136 "sequence_count": 2048, 00:30:21.136 "buf_count": 2048 00:30:21.136 } 00:30:21.136 } 00:30:21.136 ] 00:30:21.136 }, 00:30:21.136 { 00:30:21.136 "subsystem": "bdev", 00:30:21.136 "config": [ 00:30:21.136 { 00:30:21.136 "method": "bdev_set_options", 00:30:21.136 "params": { 00:30:21.136 "bdev_io_pool_size": 65535, 00:30:21.136 "bdev_io_cache_size": 256, 00:30:21.136 "bdev_auto_examine": true, 00:30:21.136 "iobuf_small_cache_size": 128, 00:30:21.136 "iobuf_large_cache_size": 16 00:30:21.136 } 00:30:21.136 }, 00:30:21.136 { 00:30:21.136 "method": "bdev_raid_set_options", 00:30:21.136 "params": { 00:30:21.136 "process_window_size_kb": 1024, 00:30:21.136 "process_max_bandwidth_mb_sec": 0 00:30:21.136 } 00:30:21.136 }, 00:30:21.136 { 00:30:21.136 "method": "bdev_iscsi_set_options", 00:30:21.136 "params": { 00:30:21.136 "timeout_sec": 30 00:30:21.136 } 00:30:21.136 
}, 00:30:21.136 { 00:30:21.136 "method": "bdev_nvme_set_options", 00:30:21.136 "params": { 00:30:21.136 "action_on_timeout": "none", 00:30:21.137 "timeout_us": 0, 00:30:21.137 "timeout_admin_us": 0, 00:30:21.137 "keep_alive_timeout_ms": 10000, 00:30:21.137 "arbitration_burst": 0, 00:30:21.137 "low_priority_weight": 0, 00:30:21.137 "medium_priority_weight": 0, 00:30:21.137 "high_priority_weight": 0, 00:30:21.137 "nvme_adminq_poll_period_us": 10000, 00:30:21.137 "nvme_ioq_poll_period_us": 0, 00:30:21.137 "io_queue_requests": 512, 00:30:21.137 "delay_cmd_submit": true, 00:30:21.137 "transport_retry_count": 4, 00:30:21.137 "bdev_retry_count": 3, 00:30:21.137 "transport_ack_timeout": 0, 00:30:21.137 "ctrlr_loss_timeout_sec": 0, 00:30:21.137 "reconnect_delay_sec": 0, 00:30:21.137 "fast_io_fail_timeout_sec": 0, 00:30:21.137 "disable_auto_failback": false, 00:30:21.137 "generate_uuids": false, 00:30:21.137 "transport_tos": 0, 00:30:21.137 "nvme_error_stat": false, 00:30:21.137 "rdma_srq_size": 0, 00:30:21.137 "io_path_stat": false, 00:30:21.137 "allow_accel_sequence": false, 00:30:21.137 "rdma_max_cq_size": 0, 00:30:21.137 "rdma_cm_event_timeout_ms": 0, 00:30:21.137 "dhchap_digests": [ 00:30:21.137 "sha256", 00:30:21.137 "sha384", 00:30:21.137 "sha512" 00:30:21.137 ], 00:30:21.137 "dhchap_dhgroups": [ 00:30:21.137 "null", 00:30:21.137 "ffdhe2048", 00:30:21.137 "ffdhe3072", 00:30:21.137 "ffdhe4096", 00:30:21.137 "ffdhe6144", 00:30:21.137 "ffdhe8192" 00:30:21.137 ] 00:30:21.137 } 00:30:21.137 }, 00:30:21.137 { 00:30:21.137 "method": "bdev_nvme_attach_controller", 00:30:21.137 "params": { 00:30:21.137 "name": "nvme0", 00:30:21.137 "trtype": "TCP", 00:30:21.137 "adrfam": "IPv4", 00:30:21.137 "traddr": "127.0.0.1", 00:30:21.137 "trsvcid": "4420", 00:30:21.137 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:21.137 "prchk_reftag": false, 00:30:21.137 "prchk_guard": false, 00:30:21.137 "ctrlr_loss_timeout_sec": 0, 00:30:21.137 "reconnect_delay_sec": 0, 00:30:21.137 "fast_io_fail_timeout_sec": 0, 00:30:21.137 "psk": "key0", 00:30:21.137 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:21.137 "hdgst": false, 00:30:21.137 "ddgst": false 00:30:21.137 } 00:30:21.137 }, 00:30:21.137 { 00:30:21.137 "method": "bdev_nvme_set_hotplug", 00:30:21.137 "params": { 00:30:21.137 "period_us": 100000, 00:30:21.137 "enable": false 00:30:21.137 } 00:30:21.137 }, 00:30:21.137 { 00:30:21.137 "method": "bdev_wait_for_examine" 00:30:21.137 } 00:30:21.137 ] 00:30:21.137 }, 00:30:21.137 { 00:30:21.137 "subsystem": "nbd", 00:30:21.137 "config": [] 00:30:21.137 } 00:30:21.137 ] 00:30:21.137 }' 00:30:21.137 21:55:29 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:21.137 21:55:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:21.137 [2024-07-24 21:55:29.190777] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
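The refcount checks that follow (and the earlier (( 1 == 1 )) / (( 2 == 2 )) assertions) all go through keyring/common.sh's get_refcnt, which pairs the keyring_get_keys RPC with jq. A rough standalone equivalent, using the same bperf socket as this run:

    # List keys on the bperf RPC server and print the refcnt of one named key.
    get_refcnt() {
        local name=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock keyring_get_keys \
            | jq -r ".[] | select(.name == \"$name\") | .refcnt"
    }
    # e.g. get_refcnt key0
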
00:30:21.137 [2024-07-24 21:55:29.190823] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256710 ] 00:30:21.137 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.137 [2024-07-24 21:55:29.243167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.396 [2024-07-24 21:55:29.324177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.396 [2024-07-24 21:55:29.482277] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:21.963 21:55:29 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:21.963 21:55:29 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:21.963 21:55:29 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:21.963 21:55:29 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:21.963 21:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:22.222 21:55:30 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:22.222 21:55:30 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:22.222 21:55:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:22.222 21:55:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:22.222 21:55:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:22.222 21:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:22.222 21:55:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:22.480 21:55:30 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:22.480 21:55:30 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:22.480 21:55:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:22.480 21:55:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:22.480 21:55:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:22.480 21:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:22.480 21:55:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:22.480 21:55:30 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:22.480 21:55:30 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:22.480 21:55:30 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:22.480 21:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:22.738 21:55:30 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:22.738 21:55:30 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:22.738 21:55:30 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.8vCgFUz2pt /tmp/tmp.UzcS5gKT1b 00:30:22.738 21:55:30 keyring_file -- keyring/file.sh@20 -- # killprocess 3256710 00:30:22.738 21:55:30 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3256710 ']' 00:30:22.738 21:55:30 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3256710 00:30:22.738 21:55:30 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:30:22.738 21:55:30 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:22.738 21:55:30 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3256710 00:30:22.738 21:55:30 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:22.738 21:55:30 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:22.738 21:55:30 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3256710' 00:30:22.738 killing process with pid 3256710 00:30:22.738 21:55:30 keyring_file -- common/autotest_common.sh@967 -- # kill 3256710 00:30:22.738 Received shutdown signal, test time was about 1.000000 seconds 00:30:22.738 00:30:22.738 Latency(us) 00:30:22.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.738 =================================================================================================================== 00:30:22.738 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:22.738 21:55:30 keyring_file -- common/autotest_common.sh@972 -- # wait 3256710 00:30:22.996 21:55:30 keyring_file -- keyring/file.sh@21 -- # killprocess 3255028 00:30:22.996 21:55:30 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3255028 ']' 00:30:22.996 21:55:30 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3255028 00:30:22.996 21:55:30 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:22.996 21:55:30 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:22.997 21:55:30 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3255028 00:30:22.997 21:55:30 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:22.997 21:55:30 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:22.997 21:55:30 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3255028' 00:30:22.997 killing process with pid 3255028 00:30:22.997 21:55:30 keyring_file -- common/autotest_common.sh@967 -- # kill 3255028 00:30:22.997 [2024-07-24 21:55:30.976309] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:22.997 21:55:30 keyring_file -- common/autotest_common.sh@972 -- # wait 3255028 00:30:23.255 00:30:23.255 real 0m11.960s 00:30:23.255 user 0m27.894s 00:30:23.255 sys 0m2.682s 00:30:23.255 21:55:31 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:23.255 21:55:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:23.255 ************************************ 00:30:23.255 END TEST keyring_file 00:30:23.255 ************************************ 00:30:23.255 21:55:31 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:30:23.255 21:55:31 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:23.255 21:55:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:23.255 21:55:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:23.255 21:55:31 -- common/autotest_common.sh@10 -- # set +x 00:30:23.255 ************************************ 00:30:23.255 START TEST keyring_linux 00:30:23.255 ************************************ 00:30:23.255 21:55:31 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:23.536 * Looking for test 
storage... 00:30:23.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:23.536 21:55:31 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:23.536 21:55:31 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.536 21:55:31 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.536 21:55:31 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.536 21:55:31 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.536 21:55:31 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.536 21:55:31 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.536 21:55:31 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.536 21:55:31 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:23.536 21:55:31 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:23.536 21:55:31 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:23.537 21:55:31 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:23.537 21:55:31 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:23.537 21:55:31 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:23.537 21:55:31 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:23.537 21:55:31 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:23.537 21:55:31 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:23.537 21:55:31 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:23.537 21:55:31 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:23.537 21:55:31 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:23.537 21:55:31 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:23.537 21:55:31 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:23.537 21:55:31 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:23.537 21:55:31 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:23.537 /tmp/:spdk-test:key0 00:30:23.537 21:55:31 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:23.537 21:55:31 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:23.537 21:55:31 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:23.537 21:55:31 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:23.537 21:55:31 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:23.537 21:55:31 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:23.537 21:55:31 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:23.537 21:55:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:23.537 /tmp/:spdk-test:key1 00:30:23.537 21:55:31 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3257250 00:30:23.537 21:55:31 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3257250 00:30:23.537 21:55:31 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:23.537 21:55:31 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3257250 ']' 00:30:23.537 21:55:31 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.537 21:55:31 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:23.537 21:55:31 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.537 21:55:31 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:23.537 21:55:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:23.537 [2024-07-24 21:55:31.607614] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:30:23.537 [2024-07-24 21:55:31.607665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257250 ] 00:30:23.537 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.801 [2024-07-24 21:55:31.661475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.801 [2024-07-24 21:55:31.736158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.367 21:55:32 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:24.367 21:55:32 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:24.367 21:55:32 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:24.367 21:55:32 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.367 21:55:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:24.367 [2024-07-24 21:55:32.407997] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.367 null0 00:30:24.367 [2024-07-24 21:55:32.440065] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:24.367 [2024-07-24 21:55:32.440390] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:24.368 21:55:32 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.368 21:55:32 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:24.368 608985480 00:30:24.368 21:55:32 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:24.368 366185960 00:30:24.368 21:55:32 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3257276 00:30:24.368 21:55:32 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3257276 /var/tmp/bperf.sock 00:30:24.368 21:55:32 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:24.368 21:55:32 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3257276 ']' 00:30:24.368 21:55:32 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:24.368 21:55:32 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:24.368 21:55:32 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:24.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:24.368 21:55:32 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:24.368 21:55:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:24.625 [2024-07-24 21:55:32.513969] Starting SPDK v24.09-pre git sha1 6b560eac9 / DPDK 24.03.0 initialization... 
00:30:24.625 [2024-07-24 21:55:32.514019] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257276 ] 00:30:24.625 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.625 [2024-07-24 21:55:32.568157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.625 [2024-07-24 21:55:32.647785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.559 21:55:33 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:25.559 21:55:33 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:25.559 21:55:33 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:25.559 21:55:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:25.559 21:55:33 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:25.559 21:55:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:25.817 21:55:33 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:25.817 21:55:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:25.817 [2024-07-24 21:55:33.887330] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:26.076 nvme0n1 00:30:26.076 21:55:33 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:26.076 21:55:33 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:26.076 21:55:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:26.076 21:55:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:26.076 21:55:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:26.076 21:55:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:26.076 21:55:34 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:26.076 21:55:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:26.076 21:55:34 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:26.076 21:55:34 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:26.076 21:55:34 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:26.076 21:55:34 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:26.076 21:55:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:26.334 21:55:34 keyring_linux -- keyring/linux.sh@25 -- # sn=608985480 00:30:26.334 21:55:34 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:26.334 21:55:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:30:26.334 21:55:34 keyring_linux -- keyring/linux.sh@26 -- # [[ 608985480 == \6\0\8\9\8\5\4\8\0 ]] 00:30:26.334 21:55:34 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 608985480 00:30:26.334 21:55:34 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:26.334 21:55:34 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:26.334 Running I/O for 1 seconds... 00:30:27.707 00:30:27.707 Latency(us) 00:30:27.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.707 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:27.707 nvme0n1 : 1.03 3041.98 11.88 0.00 0.00 41496.69 9858.89 53568.56 00:30:27.707 =================================================================================================================== 00:30:27.707 Total : 3041.98 11.88 0.00 0.00 41496.69 9858.89 53568.56 00:30:27.707 0 00:30:27.707 21:55:35 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:27.707 21:55:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:27.707 21:55:35 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:27.707 21:55:35 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:27.707 21:55:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:27.707 21:55:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:27.707 21:55:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:27.707 21:55:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:27.965 21:55:35 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:27.965 21:55:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:27.965 21:55:35 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:27.965 21:55:35 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:27.965 21:55:35 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:30:27.965 21:55:35 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:27.965 21:55:35 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:27.965 21:55:35 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:27.965 21:55:35 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:27.965 21:55:35 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:27.965 21:55:35 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:27.965 21:55:35 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:27.965 [2024-07-24 21:55:35.999956] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:27.965 [2024-07-24 21:55:36.000738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcab770 (107): Transport endpoint is not connected 00:30:27.965 [2024-07-24 21:55:36.001733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcab770 (9): Bad file descriptor 00:30:27.965 [2024-07-24 21:55:36.002731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:27.965 [2024-07-24 21:55:36.002747] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:27.965 [2024-07-24 21:55:36.002754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:27.965 request: 00:30:27.965 { 00:30:27.965 "name": "nvme0", 00:30:27.965 "trtype": "tcp", 00:30:27.965 "traddr": "127.0.0.1", 00:30:27.965 "adrfam": "ipv4", 00:30:27.965 "trsvcid": "4420", 00:30:27.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:27.965 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:27.965 "prchk_reftag": false, 00:30:27.965 "prchk_guard": false, 00:30:27.965 "hdgst": false, 00:30:27.965 "ddgst": false, 00:30:27.965 "psk": ":spdk-test:key1", 00:30:27.965 "method": "bdev_nvme_attach_controller", 00:30:27.965 "req_id": 1 00:30:27.965 } 00:30:27.965 Got JSON-RPC error response 00:30:27.965 response: 00:30:27.965 { 00:30:27.965 "code": -5, 00:30:27.965 "message": "Input/output error" 00:30:27.965 } 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@33 -- # sn=608985480 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 608985480 00:30:27.965 1 links removed 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@33 -- # sn=366185960 00:30:27.965 21:55:36 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 366185960 00:30:27.965 1 links removed 00:30:27.965 21:55:36 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3257276 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3257276 ']' 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3257276 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3257276 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3257276' 00:30:27.965 killing process with pid 3257276 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@967 -- # kill 3257276 00:30:27.965 Received shutdown signal, test time was about 1.000000 seconds 00:30:27.965 00:30:27.965 Latency(us) 00:30:27.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.965 =================================================================================================================== 00:30:27.965 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:27.965 21:55:36 keyring_linux -- common/autotest_common.sh@972 -- # wait 3257276 00:30:28.224 21:55:36 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3257250 00:30:28.224 21:55:36 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3257250 ']' 00:30:28.224 21:55:36 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3257250 00:30:28.224 21:55:36 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:28.224 21:55:36 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:28.224 21:55:36 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3257250 00:30:28.224 21:55:36 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:28.224 21:55:36 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:28.224 21:55:36 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3257250' 00:30:28.224 killing process with pid 3257250 00:30:28.224 21:55:36 keyring_linux -- common/autotest_common.sh@967 -- # kill 3257250 00:30:28.224 21:55:36 keyring_linux -- common/autotest_common.sh@972 -- # wait 3257250 00:30:28.790 00:30:28.790 real 0m5.250s 00:30:28.790 user 0m9.234s 00:30:28.790 sys 0m1.173s 00:30:28.790 21:55:36 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:28.790 21:55:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:28.790 ************************************ 00:30:28.790 END TEST keyring_linux 00:30:28.790 ************************************ 00:30:28.790 21:55:36 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:28.790 21:55:36 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:28.790 21:55:36 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:28.790 21:55:36 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:28.790 21:55:36 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:28.790 21:55:36 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:28.790 21:55:36 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:28.790 21:55:36 -- spdk/autotest.sh@343 -- 
# '[' 0 -eq 1 ']' 00:30:28.790 21:55:36 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:28.790 21:55:36 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:28.790 21:55:36 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:28.790 21:55:36 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:28.790 21:55:36 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:28.790 21:55:36 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:28.790 21:55:36 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:28.790 21:55:36 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:28.790 21:55:36 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:28.790 21:55:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:28.790 21:55:36 -- common/autotest_common.sh@10 -- # set +x 00:30:28.790 21:55:36 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:28.790 21:55:36 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:30:28.790 21:55:36 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:30:28.790 21:55:36 -- common/autotest_common.sh@10 -- # set +x 00:30:34.058 INFO: APP EXITING 00:30:34.058 INFO: killing all VMs 00:30:34.058 INFO: killing vhost app 00:30:34.058 INFO: EXIT DONE 00:30:35.435 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:30:35.435 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:30:35.435 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:30:35.435 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:30:35.435 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:30:35.435 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:30:35.435 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:30:35.435 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:30:35.694 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:30:35.694 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:30:35.694 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:30:35.694 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:30:35.694 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:30:35.694 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:30:35.694 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:30:35.694 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:30:35.694 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:30:38.226 Cleaning 00:30:38.226 Removing: /var/run/dpdk/spdk0/config 00:30:38.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:38.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:38.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:38.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:38.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:38.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:38.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:38.226 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:38.226 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:38.226 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:38.226 Removing: /var/run/dpdk/spdk1/config 00:30:38.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:38.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:38.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:38.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:38.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:38.226 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:38.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:38.226 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:38.226 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:38.226 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:38.226 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:38.226 Removing: /var/run/dpdk/spdk2/config 00:30:38.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:38.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:38.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:38.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:38.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:38.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:38.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:38.226 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:38.226 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:38.226 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:38.226 Removing: /var/run/dpdk/spdk3/config 00:30:38.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:38.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:38.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:38.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:38.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:38.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:38.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:38.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:38.226 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:38.226 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:38.226 Removing: /var/run/dpdk/spdk4/config 00:30:38.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:38.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:38.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:38.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:38.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:38.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:38.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:38.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:38.226 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:38.226 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:38.226 Removing: /dev/shm/bdev_svc_trace.1 00:30:38.226 Removing: /dev/shm/nvmf_trace.0 00:30:38.226 Removing: /dev/shm/spdk_tgt_trace.pid2872271 00:30:38.226 Removing: /var/run/dpdk/spdk0 00:30:38.226 Removing: /var/run/dpdk/spdk1 00:30:38.226 Removing: /var/run/dpdk/spdk2 00:30:38.226 Removing: /var/run/dpdk/spdk3 00:30:38.226 Removing: /var/run/dpdk/spdk4 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2869996 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2871205 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2872271 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2872906 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2873853 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2874093 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2875066 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2875241 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2875482 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2877383 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2878788 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2879142 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2879495 
00:30:38.226 Removing: /var/run/dpdk/spdk_pid2879801 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2880088 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2880348 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2880599 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2880871 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2881615 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2884600 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2884867 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2885136 00:30:38.226 Removing: /var/run/dpdk/spdk_pid2885355 00:30:38.227 Removing: /var/run/dpdk/spdk_pid2885632 00:30:38.227 Removing: /var/run/dpdk/spdk_pid2885862 00:30:38.227 Removing: /var/run/dpdk/spdk_pid2886356 00:30:38.227 Removing: /var/run/dpdk/spdk_pid2886364 00:30:38.227 Removing: /var/run/dpdk/spdk_pid2886732 00:30:38.227 Removing: /var/run/dpdk/spdk_pid2886866 00:30:38.227 Removing: /var/run/dpdk/spdk_pid2887118 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2887287 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2887704 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2887953 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2888244 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2888513 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2888632 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2888818 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2889065 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2889317 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2889566 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2889820 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2890069 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2890317 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2890571 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2890818 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2891063 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2891316 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2891564 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2891811 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2892066 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2892316 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2892568 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2892817 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2893065 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2893323 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2893569 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2893815 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2894031 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2894412 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2898084 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2902340 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2912362 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2913028 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2917385 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2917699 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2922207 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2928000 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2930815 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2941239 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2950285 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2951984 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2952901 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2970269 00:30:38.486 Removing: /var/run/dpdk/spdk_pid2974105 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3016912 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3022820 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3029026 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3035033 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3035035 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3035949 
00:30:38.486 Removing: /var/run/dpdk/spdk_pid3036664 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3037566 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3038257 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3038261 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3038495 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3038715 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3038725 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3039639 00:30:38.486 Removing: /var/run/dpdk/spdk_pid3040423 00:30:38.487 Removing: /var/run/dpdk/spdk_pid3041253 00:30:38.487 Removing: /var/run/dpdk/spdk_pid3041937 00:30:38.487 Removing: /var/run/dpdk/spdk_pid3041939 00:30:38.487 Removing: /var/run/dpdk/spdk_pid3042172 00:30:38.487 Removing: /var/run/dpdk/spdk_pid3043410 00:30:38.487 Removing: /var/run/dpdk/spdk_pid3044417 00:30:38.487 Removing: /var/run/dpdk/spdk_pid3052708 00:30:38.487 Removing: /var/run/dpdk/spdk_pid3077260 00:30:38.487 Removing: /var/run/dpdk/spdk_pid3081750 00:30:38.487 Removing: /var/run/dpdk/spdk_pid3083353 00:30:38.487 Removing: /var/run/dpdk/spdk_pid3085338 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3085496 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3085669 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3085905 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3086624 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3088470 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3089458 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3089959 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3092224 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3092782 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3093641 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3098191 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3108010 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3111905 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3117809 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3119114 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3120595 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3124990 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3129014 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3136401 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3136559 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3141078 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3141306 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3141536 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3141945 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3142000 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3146594 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3147122 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3151887 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3154571 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3160033 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3165368 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3173914 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3181127 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3181129 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3199477 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3200171 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3200669 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3201343 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3202318 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3202879 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3203492 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3204188 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3208435 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3208674 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3214531 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3214803 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3217025 
00:30:38.746 Removing: /var/run/dpdk/spdk_pid3224763 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3224768 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3229784 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3231754 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3233720 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3234876 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3237223 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3238527 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3247044 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3247501 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3248058 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3250436 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3250901 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3251367 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3255028 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3255188 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3256710 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3257250 00:30:38.746 Removing: /var/run/dpdk/spdk_pid3257276 00:30:38.746 Clean 00:30:39.005 21:55:46 -- common/autotest_common.sh@1449 -- # return 0 00:30:39.005 21:55:46 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:39.005 21:55:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:39.005 21:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:39.005 21:55:46 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:39.005 21:55:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:39.005 21:55:46 -- common/autotest_common.sh@10 -- # set +x 00:30:39.005 21:55:46 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:39.005 21:55:46 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:30:39.005 21:55:46 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:30:39.005 21:55:46 -- spdk/autotest.sh@391 -- # hash lcov 00:30:39.005 21:55:46 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:39.005 21:55:46 -- spdk/autotest.sh@393 -- # hostname 00:30:39.005 21:55:46 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:30:39.264 geninfo: WARNING: invalid characters removed from testname! 
00:31:01.199 21:56:07 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:01.766 21:56:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:03.667 21:56:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:05.610 21:56:13 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:07.516 21:56:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:08.896 21:56:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:10.802 21:56:18 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:10.802 21:56:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.802 21:56:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:10.802 21:56:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.802 21:56:18 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.802 21:56:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.802 21:56:18 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.802 21:56:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.802 21:56:18 -- paths/export.sh@5 -- $ export PATH 00:31:10.802 21:56:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.802 21:56:18 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:31:10.802 21:56:18 -- common/autobuild_common.sh@447 -- $ date +%s 00:31:10.802 21:56:18 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721850978.XXXXXX 00:31:10.802 21:56:18 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721850978.PDrdLp 00:31:10.802 21:56:18 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:31:10.802 21:56:18 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:31:10.802 21:56:18 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:31:10.803 21:56:18 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:31:10.803 21:56:18 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:31:10.803 21:56:18 -- common/autobuild_common.sh@463 -- $ get_config_params 00:31:10.803 21:56:18 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:31:10.803 21:56:18 -- common/autotest_common.sh@10 -- $ set +x 00:31:10.803 21:56:18 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:31:10.803 21:56:18 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:31:10.803 21:56:18 -- pm/common@17 -- $ local monitor 00:31:10.803 21:56:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:10.803 21:56:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:10.803 21:56:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:10.803 21:56:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:10.803 21:56:18 -- pm/common@25 -- $ sleep 1 00:31:10.803 21:56:18 -- pm/common@21 -- $ date +%s 00:31:10.803 
00:31:10.803 21:56:18 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:31:10.803 21:56:18 -- pm/common@17 -- $ local monitor
00:31:10.803 21:56:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:10.803 21:56:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:10.803 21:56:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:10.803 21:56:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:10.803 21:56:18 -- pm/common@25 -- $ sleep 1
00:31:10.803 21:56:18 -- pm/common@21 -- $ date +%s
00:31:10.803 21:56:18 -- pm/common@21 -- $ date +%s
00:31:10.803 21:56:18 -- pm/common@21 -- $ date +%s
00:31:10.803 21:56:18 -- pm/common@21 -- $ date +%s
00:31:10.803 21:56:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721850978
00:31:10.803 21:56:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721850978
00:31:10.803 21:56:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721850978
00:31:10.803 21:56:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721850978
00:31:10.803 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721850978_collect-cpu-temp.pm.log
00:31:10.803 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721850978_collect-vmstat.pm.log
00:31:10.803 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721850978_collect-cpu-load.pm.log
00:31:10.803 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721850978_collect-bmc-pm.bmc.pm.log
00:31:11.741 21:56:19 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:31:11.741 21:56:19 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:31:11.741 21:56:19 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:11.741 21:56:19 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:11.741 21:56:19 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:11.741 21:56:19 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:11.742 21:56:19 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:11.742 21:56:19 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:11.742 21:56:19 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:11.742 21:56:19 -- spdk/autopackage.sh@20 -- $ exit 0
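autopackage exits at this point, so the EXIT trap installed above fires; the trace that follows shows stop_monitor_resources signalling each collector through its pid file, with sudo used only for the BMC collector. A hedged sketch of that teardown, reusing the assumed output_dir and MONITOR_RESOURCES from the sketch above; the function body is modeled on the trace, not copied from pm/common.

  # Hedged sketch of the EXIT-trap teardown shown in the trace below; only the
  # pid-file paths, the TERM signal and the sudo for collect-bmc-pm come from the log.
  stop_monitor_resources() {
      local monitor pid signal=TERM
      for monitor in "${MONITOR_RESOURCES[@]}"; do
          [[ -e "$output_dir/$monitor.pid" ]] || continue
          pid=$(< "$output_dir/$monitor.pid")
          if [[ $monitor == collect-bmc-pm ]]; then
              sudo -E kill -$signal "$pid"   # BMC collector was started under sudo
          else
              kill -$signal "$pid"
          fi
      done
  }
  trap stop_monitor_resources EXIT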
00:31:11.742 21:56:19 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:11.742 21:56:19 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:11.742 21:56:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:11.742 21:56:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:11.742 21:56:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:31:11.742 21:56:19 -- pm/common@44 -- $ pid=3267210
00:31:11.742 21:56:19 -- pm/common@50 -- $ kill -TERM 3267210
00:31:11.742 21:56:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:11.742 21:56:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:31:11.742 21:56:19 -- pm/common@44 -- $ pid=3267211
00:31:11.742 21:56:19 -- pm/common@50 -- $ kill -TERM 3267211
00:31:11.742 21:56:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:11.742 21:56:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:31:11.742 21:56:19 -- pm/common@44 -- $ pid=3267213
00:31:11.742 21:56:19 -- pm/common@50 -- $ kill -TERM 3267213
00:31:11.742 21:56:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:11.742 21:56:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:31:11.742 21:56:19 -- pm/common@44 -- $ pid=3267239
00:31:11.742 21:56:19 -- pm/common@50 -- $ sudo -E kill -TERM 3267239
00:31:12.001 + [[ -n 2766354 ]]
00:31:12.001 + sudo kill 2766354
00:31:12.013 [Pipeline] }
00:31:12.034 [Pipeline] // stage
00:31:12.040 [Pipeline] }
00:31:12.060 [Pipeline] // timeout
00:31:12.066 [Pipeline] }
00:31:12.082 [Pipeline] // catchError
00:31:12.086 [Pipeline] }
00:31:12.106 [Pipeline] // wrap
00:31:12.113 [Pipeline] }
00:31:12.130 [Pipeline] // catchError
00:31:12.140 [Pipeline] stage
00:31:12.143 [Pipeline] { (Epilogue)
00:31:12.159 [Pipeline] catchError
00:31:12.160 [Pipeline] {
00:31:12.175 [Pipeline] echo
00:31:12.177 Cleanup processes
00:31:12.182 [Pipeline] sh
00:31:12.472 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:12.472 3267338 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:31:12.472 3267607 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:12.486 [Pipeline] sh
00:31:12.770 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:12.770 ++ grep -v 'sudo pgrep'
00:31:12.770 ++ awk '{print $1}'
00:31:12.770 + sudo kill -9 3267338
00:31:12.784 [Pipeline] sh
00:31:13.071 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:23.075 [Pipeline] sh
00:31:23.361 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:23.361 Artifacts sizes are good
00:31:23.374 [Pipeline] archiveArtifacts
00:31:23.380 Archiving artifacts
00:31:23.556 [Pipeline] sh
00:31:23.838 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:31:23.852 [Pipeline] cleanWs
00:31:23.862 [WS-CLEANUP] Deleting project workspace...
00:31:23.862 [WS-CLEANUP] Deferred wipeout is used...
00:31:23.869 [WS-CLEANUP] done
00:31:23.872 [Pipeline] }
00:31:23.893 [Pipeline] // catchError
00:31:23.906 [Pipeline] sh
00:31:24.194 + logger -p user.info -t JENKINS-CI
00:31:24.210 [Pipeline] }
00:31:24.227 [Pipeline] // stage
00:31:24.233 [Pipeline] }
00:31:24.250 [Pipeline] // node
00:31:24.255 [Pipeline] End of Pipeline
00:31:24.287 Finished: SUCCESS
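The epilogue above sweeps any process still running out of the workspace before artifacts are compressed and archived: pgrep lists candidates, the pgrep invocation itself is filtered out, and the survivors are force-killed. Roughly, as a hedged sketch of that shell pattern (the workspace variable and the trailing '|| true' guard are illustrative assumptions, not the pipeline's exact script):

  workspace=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # List workspace processes, drop the pgrep command itself, keep only the PIDs.
  pids=$(sudo pgrep -af "$workspace/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # Force-kill whatever is left; the guard keeps an empty list from failing the step.
  [[ -z $pids ]] || sudo kill -9 $pids || true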